How We Score: The EWEAR Methodology Explained
Every article on the EVERYWEAR dashboard carries an EWEAR score from 0 to 100. This score determines whether an article surfaces, how it ranks, and which stories make it into our category pages and weekly highlights. Here is exactly how the system works, why we built it this way, and what it means for the content you see.
The Five Scoring Dimensions
Every article that enters the EWEAR system is scored across five dimensions. The total maximum score is 100 points.
1. Relevance (0-30 points)
How relevant is this article to wearable technology? This is the highest-weighted dimension because it is the most important filter. An article that scores 0 on Relevance is excluded entirely, regardless of how well it performs on other dimensions. Relevance is assessed through keyword matching against a curated dictionary of wearable-specific terms, product names, brand names, and technology categories. Articles about general tech, smartphones, or non-wearable devices score lower. Articles specifically about smartwatches, fitness trackers, smart rings, hearables, AR/VR headsets, or health wearables score higher.
2. Freshness (0-25 points)
How recent is this article? Freshness decays over time. An article published in the last 6 hours scores maximum points. Articles from 6-24 hours ago score well. Anything older than 48 hours scores significantly lower. This ensures the dashboard always prioritises breaking news and recent developments over older content, while still allowing strong evergreen pieces to surface if their other scores are high enough.
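The decay bands described above can be sketched as a simple step function. The point values per band are assumptions for illustration; only the band boundaries (6, 24, and 48 hours) come from the description above:

```python
from datetime import datetime, timedelta

# Illustrative freshness decay. Band boundaries follow the text;
# the exact point values per band are assumptions.
def freshness_score(published: datetime, now: datetime) -> int:
    age = now - published
    if age <= timedelta(hours=6):
        return 25            # breaking news: maximum points
    if age <= timedelta(hours=24):
        return 18            # same-day coverage still scores well
    if age <= timedelta(hours=48):
        return 10
    return 3                 # older than 48 hours scores significantly lower
```

Because freshness is a function of time, the same article's total EWEAR score drifts downward as it ages, which is what lets newer stories overtake it.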
3. Source Authority (0-20 points)
How trustworthy is the source? Each of our 34 data sources is assigned an authority tier based on editorial standards, fact-checking reputation, and domain expertise. Tier 1 sources (The Verge, Ars Technica, Wired) score highest. Tier 2 sources (specialised tech blogs, manufacturer newsrooms) score moderately. Tier 3 sources (aggregators, smaller publications) score lower. This is not a judgement on quality — a Tier 3 source can still break important news — but it reflects general editorial reliability.
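In code, the tier system reduces to a lookup table. The tier assignments below mirror the examples in the text; the points per tier and the default for unlisted sources are assumptions:

```python
# Illustrative tier lookup. Points per tier are assumptions.
TIER_POINTS = {1: 20, 2: 12, 3: 6}
SOURCE_TIERS = {
    "The Verge": 1, "Ars Technica": 1, "Wired": 1,
    # Tier 2 and Tier 3 entries would list the remaining sources.
}

def authority_score(source: str) -> int:
    tier = SOURCE_TIERS.get(source, 3)   # unknown sources default to Tier 3
    return TIER_POINTS[tier]
```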
4. Brand Signal (0-15 points)
Does this article mention specific wearable brands or products? Brand Signal rewards articles that cover specific products or companies that our audience cares about. We track 21 brands across the wearable ecosystem, and articles that mention tracked brands score higher. This helps surface product-specific news — reviews, launches, updates, price changes — over general industry commentary. An article about "the future of wearables" scores lower on Brand Signal than an article about "the new Apple Watch Ultra 3 features."
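A sketch of brand matching, using word-boundary matching so "Meta" does not match inside "metal". The brand list here is a subset of the tracked brands listed later in this page; the points per mention are an assumption:

```python
import re

# Illustrative brand matching. Five points per matched brand is an assumption.
TRACKED_BRANDS = ["Apple", "Samsung", "Google", "Garmin", "Fitbit", "Oura", "Meta"]

def brand_signal(title: str, description: str, cap: int = 15) -> tuple[int, list[str]]:
    text = f"{title} {description}"
    # Word boundaries avoid false positives inside longer words.
    hits = [b for b in TRACKED_BRANDS
            if re.search(rf"\b{re.escape(b)}\b", text, re.IGNORECASE)]
    return min(cap, 5 * len(hits)), hits   # score plus tags for the article
```

"The new Apple Watch Ultra 3 features" matches one tracked brand and earns points; "the future of wearables" matches none and earns zero on this dimension.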
5. Depth (0-10 points)
Is this a substantive piece of content? Depth rewards longer, more detailed articles over brief news snippets. It considers content length, the presence of structured data (specifications, comparisons, multiple product mentions), and whether the article provides analysis beyond surface-level reporting. A 2,000-word in-depth review scores higher than a 200-word news brief. This ensures that the most informative content gets visibility, even if it is from a smaller source.
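The three factors named above (length, structured data, multiple product mentions) might combine like this. The word-count thresholds and per-factor points are assumptions for illustration:

```python
# Illustrative depth heuristic. Thresholds and points are assumptions.
def depth_score(body: str, has_specs: bool, product_mentions: int) -> int:
    words = len(body.split())
    score = 0
    if words >= 1500:
        score += 6           # long-form review or analysis
    elif words >= 600:
        score += 4
    elif words >= 250:
        score += 2           # more than a news brief
    if has_specs:
        score += 2           # structured data: specifications, comparisons
    if product_mentions >= 2:
        score += 2
    return min(score, 10)    # clamp to the dimension's 0-10 range
```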
The Wearable Context Gate
Before any article enters the scoring pipeline, it must pass the wearable context gate. This is a binary filter that determines whether an article is about wearable technology at all. The gate checks for the presence of wearable-specific terms in the title and description, and cross-references against a curated list of wearable product names, brand names, and technology categories.
Articles that fail the context gate are excluded entirely. This is essential because many of our sources — like The Verge or TechCrunch — cover all of technology, not just wearables. Without the context gate, the dashboard would be overwhelmed with smartphone, laptop, and AI articles that happen to come from the same sources. The gate ensures that only genuinely wearable-relevant content enters the scoring system.
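Because the gate is binary, it reduces to a single boolean check over the title and description. The signal terms below are illustrative; the real curated list is far larger:

```python
# Minimal sketch of the binary context gate. Term list is illustrative.
WEARABLE_SIGNALS = {
    "smartwatch", "fitness tracker", "smart ring", "hearable",
    "vr headset", "smart glasses", "wearable",
}

def passes_context_gate(title: str, description: str) -> bool:
    text = f"{title} {description}".lower()
    return any(term in text for term in WEARABLE_SIGNALS)
```

A smartphone review from The Verge fails this check and never reaches scoring, which is exactly the behaviour the gate exists to guarantee.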
Our 34 Data Sources
EVERYWEAR pulls from 34 curated RSS feeds and data sources across the technology media landscape, selected for their coverage of wearable technology.
Sources are reviewed periodically. If a source consistently produces low-relevance content or goes inactive, it is replaced. If a new source emerges that covers wearable tech with quality and consistency, it is added.
The 8 Categories
Every article that passes the context gate and receives a score is assigned to one of eight categories. These categories power the dashboard navigation and category pages:
- Smartwatches — Apple Watch, Samsung Galaxy Watch, Google Pixel Watch, Garmin, and all smartwatch-related coverage.
- Fitness Trackers — Fitbit, Xiaomi, Amazfit, Garmin fitness bands, and fitness-focused wearables.
- AR/VR Wearables — Apple Vision Pro, Meta Quest, smart glasses, mixed reality headsets.
- Health Devices — Medical wearables, continuous glucose monitors, hearing aids, clinical-grade devices.
- Hearables — Wireless earbuds, smart headphones, hearing enhancement devices.
- Smart Rings — Oura, Samsung Galaxy Ring, RingConn, Ultrahuman, and other ring-form wearables.
- Smart Clothing — Sensor-equipped garments, smart shoes, haptic vests, and textile-based wearables.
- General Wearable Tech — Cross-category news, industry trends, market analysis, and anything that does not fit a specific category.
The 21 Tracked Brands
EWEAR tracks 21 brands across the wearable ecosystem. When an article mentions a tracked brand, it receives a Brand Signal boost and the article is tagged accordingly. The tracked brands are:
- Smartwatches: Apple, Samsung, Google, Garmin, Amazfit, Huawei, OnePlus, Mobvoi
- Fitness & Health: Fitbit, WHOOP, Oura, Xiaomi, COROS, Polar, Suunto
- Hearables: Sony, Bose, Jabra, Nothing
- AR/VR: Meta, Apple (Vision Pro)
How Articles Get Filtered
The EWEAR pipeline processes articles through several stages:
- Ingestion. RSS feeds from all 34 sources are polled at regular intervals. New articles are collected with their title, description, publication date, source, and URL.
- Deduplication. Duplicate articles (same story covered by multiple sources) are identified and consolidated. The highest-authority source version is kept as the primary, with other sources noted.
- Context gate. Each article is checked for wearable relevance. Non-wearable articles are excluded.
- Scoring. Articles that pass the gate are scored across all five dimensions. The total EWEAR score (0-100) determines ranking.
- Categorisation. Each scored article is assigned to a category based on content analysis and brand mentions.
- Display. Articles are surfaced on the dashboard, category pages, and weekly highlights based on their EWEAR score and recency.
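The stages above can be sketched end to end. This is a simplified model, not the production pipeline: the dedup key (normalised title), the function names, and the callable-based design are all assumptions, and categorisation and display are omitted:

```python
from dataclasses import dataclass

# Simplified article record; the real pipeline also carries
# publication date and other metadata.
@dataclass
class Article:
    title: str
    description: str
    source: str
    url: str
    score: int = 0

def run_pipeline(articles, gate, score):
    # Deduplication: keep the first article per normalised title.
    seen, unique = set(), []
    for a in articles:
        key = a.title.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(a)
    # Context gate, then scoring and ranking by total score.
    kept = [a for a in unique if gate(a)]
    for a in kept:
        a.score = score(a)
    return sorted(kept, key=lambda a: a.score, reverse=True)
```

In production the dedup step would also prefer the highest-authority source as the primary copy, per the description above; this sketch simply keeps the first one seen.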
Why Rule-Based, Not AI Scoring
A common question: why does EWEAR use a rule-based scoring system rather than an AI/ML model?
The decision is deliberate. Rule-based scoring provides:
- Transparency. Every score can be explained. You can see exactly why an article scored 78 — because the rules are deterministic. An ML model would be a black box.
- Consistency. The same article will always receive the same score (given the same freshness). There is no variance from model drift, fine-tuning changes, or prompt sensitivity.
- Speed. Rule-based scoring runs in milliseconds per article. No API calls, no inference latency, no cost per request. This matters when processing hundreds of articles daily.
- Auditability. If a score seems wrong, we can trace exactly which rules contributed and adjust. With an ML model, debugging individual scores is significantly harder.
- No bias amplification. ML models trained on engagement data tend to amplify controversy and sensationalism. Our rule-based approach rewards relevance, authority, and depth instead.
This does not mean AI has no role at EVERYWEAR. Our blog guides use AI assistance for research and writing (always disclosed). But the scoring system that determines what content surfaces on the dashboard is entirely rule-based and deterministic.
Score Interpretation Guide
Articles scoring below 40 are typically not displayed on the main dashboard, though they may appear in category-specific pages if that category has limited recent coverage. The weekly highlights feature only surfaces articles that scored 65 or above during the week.
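The two thresholds above translate directly into display logic. A sketch, assuming a simple surface list per article (the surface names are illustrative):

```python
# Illustrative display rules from the thresholds described above:
# below 40 is hidden from the main dashboard; highlights need 65+.
def display_surfaces(score: int) -> list[str]:
    surfaces = []
    if score >= 40:
        surfaces.append("dashboard")
    if score >= 65:
        surfaces.append("weekly_highlights")
    # Category pages may still show lower scorers when coverage is thin.
    surfaces.append("category_page")
    return surfaces
```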
Limitations and Honesty
No scoring system is perfect. Here are the known limitations of EWEAR:
- Keyword dependency. The context gate and relevance scoring rely on keyword matching, which means articles using unusual terminology or covering emerging categories may be underscored initially.
- English only. EWEAR currently processes English-language sources only. Important wearable news from non-English markets (particularly Asia) may be missed until covered by English-language outlets.
- Source bias. The 34-source list inevitably reflects a particular slice of the media landscape. We review and update sources regularly, but coverage gaps exist.
- No sentiment analysis. EWEAR scores what an article is about, not what it says. A strongly negative review and a glowing review of the same product may receive similar scores.
The Philosophy
EWEAR exists to solve a simple problem: there is too much wearable tech news, and most people do not have time to read all of it. The scoring system surfaces the most relevant, fresh, authoritative, and substantive content so you can stay informed without drowning in noise. It is opinionated by design — the weights reflect our view of what matters most — but it is transparent about how those opinions translate into scores. For more about EVERYWEAR and its mission, visit our about page.