Word & Char Counter
Count words, characters, sentences & reading time.
What does this counter measure?
The counter reports seven core metrics on every keystroke: characters with spaces, characters without spaces, words, sentences, paragraphs, reading time at 200 wpm, and speaking time at 130 wpm. The reading-time figure assumes silent adult nonfiction prose; the speaking figure assumes a conversational delivery pace. An optional second layer surfaces top keyword density with a stopword filter, the Flesch Reading Ease score, and the Flesch-Kincaid Grade Level.
Worked example: a 500-word article counted by our tool resolves to roughly 2,800 characters with spaces, 28 sentences, 6 paragraphs, a 2.5-minute read, and a 3.8-minute spoken delivery. If the same article scores Flesch Reading Ease 65, the readability layer labels it "plain English / 8th-grade reading level" and the F-KG result lands near 8. That is the band most B2C blog posts target.
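The core metrics above can be sketched in a few lines of JavaScript. The heuristics here (a sentence ends at terminal punctuation, a paragraph is a blank-line-separated block) and the function name `coreMetrics` are illustrative assumptions, not the tool's actual implementation:

```javascript
// Minimal sketch of the core metrics, using the 200 wpm / 130 wpm
// defaults described above. English prose is assumed throughout.
function coreMetrics(text) {
  const trimmed = text.trim();
  const words = trimmed === '' ? 0 : trimmed.split(/\s+/).length;
  return {
    charsWithSpaces: text.length,
    charsWithoutSpaces: text.replace(/\s/g, '').length,
    words,
    // A sentence ends at '.', '!' or '?' followed by whitespace or end of text.
    sentences: (trimmed.match(/[.!?]+(?=\s|$)/g) || []).length,
    // Paragraphs are blocks separated by one or more blank lines.
    paragraphs: trimmed === '' ? 0 : trimmed.split(/\n\s*\n/).length,
    readingMinutes: words / 200,
    speakingMinutes: words / 130,
  };
}

const m = coreMetrics('One sentence here. And a second one!');
console.log(m.words, m.sentences); // 7 2
```

Real text will defeat the sentence heuristic ("Dr. Smith" reads as a sentence break), which is one reason counts differ between tools.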
How readability scores work
Flesch Reading Ease is the 1948 Rudolf Flesch formula 206.835 - 1.015 * ASL - 84.6 * ASW, where ASL is the average sentence length in words and ASW is the average number of syllables per word. Higher scores mean easier prose. A score in the 60-70 band corresponds to a standard adult reader (8th-to-9th-grade level), 50-60 reads as fairly difficult, and anything below 30 is academic-grade dense. Flesch-Kincaid Grade Level is a sister formula that converts the same two inputs into a US school grade, so an F-KG of 8 means an average 8th-grade student can read the text without stalling.
Worked example: a New Yorker-style paragraph with long sentences and Latinate vocabulary scores around FRE 50 and F-KG 12. A tabloid headline written for skim-readers lands closer to FRE 80 and F-KG 5. The two scores do not always agree at the edges, but for most prose in the 40-80 range they track tightly. Editorial stance: technical writing should aim for roughly FRE 50, since precision and domain vocabulary outrank breeziness. B2C marketing copy should aim for roughly FRE 70, where sentences are short enough for mobile screens and word choice does not slow the reader down. The exact target is less important than knowing which band the audience reads in.
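Both formulas are simple arithmetic once the raw counts exist. A hedged sketch, where syllable counting (the genuinely hard part for English) is assumed to happen elsewhere and the counts are passed in directly:

```javascript
// Flesch Reading Ease: 206.835 - 1.015 * ASL - 84.6 * ASW
function fleschReadingEase(totalWords, totalSentences, totalSyllables) {
  const asl = totalWords / totalSentences;   // average sentence length
  const asw = totalSyllables / totalWords;   // average syllables per word
  return 206.835 - 1.015 * asl - 84.6 * asw;
}

// Flesch-Kincaid Grade Level: 0.39 * ASL + 11.8 * ASW - 15.59
function fleschKincaidGrade(totalWords, totalSentences, totalSyllables) {
  const asl = totalWords / totalSentences;
  const asw = totalSyllables / totalWords;
  return 0.39 * asl + 11.8 * asw - 15.59;
}

// The 500-word example above, assuming ~1.5 syllables per word:
console.log(fleschReadingEase(500, 28, 750).toFixed(1));  // 61.8
console.log(fleschKincaidGrade(500, 28, 750).toFixed(1)); // 9.1
```

With these inputs both scores land in the "plain English" band, though a notch harder than the worked example's FRE 65; small shifts in the assumed syllable count move the numbers by several points, which is why syllable counting dominates implementation quality.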
Common pitfalls
Word counting and readability scoring look obvious until they meet the messy edges of real text. The four pitfalls below are the ones we see most often when content writers compare numbers across tools, when copy hits a platform character limit, or when readability scores get treated as targets instead of diagnostics. Each one stems from treating a default value as universal or from applying English-only logic to text that does not fit the model.
- Treating "200 words/minute" as a universal reading rate. It is a nonfiction-prose default for adult silent reading. Technical and academic reading drops to roughly 150 wpm because readers backtrack and chew on definitions. Light fiction climbs to 250-300 wpm because the prose carries fewer surprises per sentence. A reading-time estimate that does not name its assumed pace is hiding a 2x range.
- Comparing the raw character count to "Twitter's 280-character limit" without knowing how Twitter actually counts. Twitter applies special weighting: every URL counts as 23 characters regardless of its real length because t.co wraps the link, and most emoji count as 2 because they ship as UTF-16 surrogate pairs. A post that fits 280 in our counter can fail Twitter validation, and a post that looks 30 characters too long here can sail through if half the overage is a URL.
- Optimizing the readability score in isolation. Flesch-Kincaid rewards short sentences and short words, and the gradient pushes any text toward those two attributes whether or not the meaning survives. Pushed past the audience's comfort band, the output reads as choppy and infantile prose that wastes the reader's time. We treat the score as a sanity check, not a target.
- Counting words in CJK languages with whitespace-split logic. Chinese, Japanese, and Korean do not separate words with spaces, so a regex that splits on `/\s+/` returns 1 for an entire paragraph of CJK text. Our counter detects CJK input and falls back to character count, which is the conventional substitute for "word count" in those languages. Readability scores stay disabled for non-English input because the formulas are calibrated for English syllable patterns.
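The CJK fallback in the last pitfall can be sketched with Unicode script property escapes (ES2018+). The detection regex and the function name are illustrative, not our production code:

```javascript
// Detect Han, Hiragana, Katakana, or Hangul code points; if present,
// report characters instead of whitespace-split words.
const CJK = /[\p{Script=Han}\p{Script=Hiragana}\p{Script=Katakana}\p{Script=Hangul}]/u;

function countWordsOrChars(text) {
  const trimmed = text.trim();
  if (trimmed === '') return { count: 0, unit: 'words' };
  if (CJK.test(trimmed)) {
    // Spread iterates by code point, so characters outside the BMP
    // count once rather than as two UTF-16 units.
    return { count: [...trimmed.replace(/\s/g, '')].length, unit: 'characters' };
  }
  return { count: trimmed.split(/\s+/).length, unit: 'words' };
}

console.log(countWordsOrChars('今日は良い天気です')); // { count: 9, unit: 'characters' }
console.log(countWordsOrChars('a fine day today'));  // { count: 4, unit: 'words' }
```

Mixed-script input is a judgment call; this sketch switches to character count as soon as any CJK appears, which matches the conservative behavior described above.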
When to use this tool
We built the counter for three concrete workflows. The first is a content writer trimming a blog post to fit under the 155-character SEO meta description limit, where the description has to land in a tight window or Google will rewrite it from page text. Pasting the draft and watching the character count update on every edit removes the guesswork. The second is a student hitting a 1,500-word essay target without padding: the live word count plus the sentence and paragraph metrics flag whether the draft has reached length through substance or through filler. The third is a copywriter testing a Twitter thread against the platform's 280-per-tweet rule, with the 23-character URL adjustment in mind. We surface the raw count here, and the writer applies the URL credit per tweet to see whether each one actually fits.
Frequently asked
- How do you count a word?
- We split on runs of whitespace using the regex `/\s+/`. Hyphenated terms like 'state-of-the-art' count as 1 word, matching what most whitespace-splitting counters report; tools that treat hyphens as word breaks count the same string as 4. Contractions like 'don't' count as 1 word. Multiple spaces collapse to a single boundary, and a trailing space at the end of input does not inflate the count.
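Those edge cases are easy to verify with the same `/\s+/` split; `countWords` here is an illustrative helper, not the tool's exported API:

```javascript
// Trim first so leading/trailing whitespace can't inflate the count,
// then split on runs of whitespace of any length.
function countWords(text) {
  const trimmed = text.trim();
  return trimmed === '' ? 0 : trimmed.split(/\s+/).length;
}

console.log(countWords('state-of-the-art')); // 1 — hyphens don't break words
console.log(countWords("don't stop"));       // 2 — a contraction is one word
console.log(countWords('a   b '));           // 2 — space runs collapse to one boundary
```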
- Is the reading time accurate?
- We default to 200 wpm for adult nonfiction prose. Silent reading averages 200-250 wpm across most genres; speaking averages 130 wpm. Technical reading drops to roughly 150 wpm; light fiction climbs to 250-300 wpm. We treat the number as a starting estimate, not a precise measure for any specific reader. A 1,000-word article lands at 5 minutes at 200 wpm, closer to 7 if it's dense technical material.
- Why do my Twitter character counts differ here?
- Twitter applies special weighting rules. URLs always count as 23 characters regardless of actual length because t.co wraps every link. Emoji count as 2 (UTF-16 surrogate pairs). CJK characters count as 2. Our counter shows raw character count, which is the right number for SEO meta descriptions, essays, and most platforms. For exact Twitter compliance we recommend the official `twitter-text` library, which encodes the full weighting table.
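For rough pre-checks, the three weighting rules above can be approximated in a few lines. This is a sketch under stated assumptions (URLs are flat 23, CJK and astral-plane emoji weigh 2), not a substitute for `twitter-text`, whose weighting table covers more ranges:

```javascript
const URL_RE = /https?:\/\/\S+/g;
const CJK_RE = /[\p{Script=Han}\p{Script=Hiragana}\p{Script=Katakana}\p{Script=Hangul}]/u;

function approxTweetLength(text) {
  // Every URL counts as a flat 23 regardless of its real length.
  const urls = text.match(URL_RE) || [];
  const rest = text.replace(URL_RE, '');
  let length = urls.length * 23;
  for (const cp of rest) {                 // iterate by code point, not UTF-16 unit
    if (CJK_RE.test(cp) || cp.codePointAt(0) > 0xffff) {
      length += 2;                         // CJK and astral-plane emoji weigh 2
    } else {
      length += 1;
    }
  }
  return length;
}

console.log(approxTweetLength('read this: https://example.com/a/very/long/path'));
// 11 visible chars + 23 for the URL = 34
```

The `> 0xffff` test catches most emoji but not BMP ones like ☺, and it ignores variation selectors and ZWJ sequences; for exact compliance, use the official library.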
- What's a good Flesch Reading Ease score?
- It depends on the audience. 60-70 covers most B2C content for a standard adult reader and corresponds to roughly an 8th-grade reading level. 50-60 is fairly difficult (technical or professional). 70-80 is fairly easy (popular fiction, casual blog posts). 30-50 is academic-grade difficult. We aim for the middle of the target audience's range and resist optimizing blindly toward higher numbers, which often signals oversimplification rather than clarity.
- Does this work for non-English text?
- Partially. Word counts work for any language with whitespace word boundaries: most European languages, Indonesian, and Vietnamese all split cleanly. Reading time still uses the 200 wpm assumption, which is less accurate for languages with shorter or longer mean word lengths. Readability scores like Flesch Reading Ease are calibrated for English and produce meaningless numbers in other languages. CJK input (Chinese, Japanese, Korean) falls back to character count because there are no word-internal spaces.
- Should I always optimize for higher readability?
- No. Higher readability favors short sentences and short words. Pushed too far, the result is choppy, infantile prose that wastes the reader's time. We use the score as a sanity check ('am I writing too dense for this audience?') rather than a target to maximize. Some genres reward complexity: legal, academic, and scientific writing trade ease for precision, and over-simplifying breaks the contract with the reader.