How we score. Fully transparent.
Every BetterScore is calculated from 8 weighted dimensions. The methodology is public, the data sources are listed, and the algorithm is audited annually by an independent third party.
The 8 Dimensions Explained
Feature Depth & Coverage
Dimension #1: Does it do what it claims? How complete is the feature set vs. the category benchmark?
High score: Covers 90%+ of core category features. Unique capabilities that competitors lack.
Low score: Missing core category features. Requires workarounds or add-ons for basic tasks.
Value for Money
Dimension #2: Price per user vs. feature set vs. category average. Are you paying a fair price?
High score: Below-average price for above-average features. Clear pricing, no hidden costs.
Low score: Expensive relative to feature set. Hidden costs, aggressive upselling.
Customer Support Quality
Dimension #3: Response time, resolution rate, support channel availability, CSAT scores.
High score: Fast response times, multiple channels, high CSAT scores, proactive support.
Low score: Slow responses, limited channels, poor resolution rates, no SLA.
Onboarding & Ease of Use
Dimension #4: Time-to-value, UI clarity, documentation quality, onboarding experience.
High score: Fast onboarding, intuitive UI, excellent documentation, quick time-to-value.
Low score: Steep learning curve, confusing UI, poor documentation, long time-to-value.
Reliability & Uptime
Dimension #5: Verified uptime data, incident history, SLA adherence from public status pages.
High score: 99.95%+ uptime, few incidents, fast resolution, strong SLA adherence.
Low score: Below 99.9% uptime, frequent incidents, slow resolution.
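For intuition, the uptime thresholds above translate into a monthly downtime budget. A minimal sketch, assuming a 30-day month; the function name is ours, not part of the methodology:

```python
def monthly_downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Downtime per month implied by an uptime percentage."""
    return (1 - uptime_pct / 100) * days * 24 * 60

# 99.95% uptime leaves about 21.6 minutes of downtime in a 30-day month;
# 99.9% leaves about 43.2 minutes, roughly double the budget.
print(round(monthly_downtime_minutes(99.95), 1))  # 21.6
print(round(monthly_downtime_minutes(99.9), 1))   # 43.2
```

The gap between the two thresholds looks tiny as a percentage but doubles the incident time buyers actually experience.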
Vendor Transparency
Dimension #6: Honest pricing pages, no bait-and-switch, no dark patterns, public roadmap.
High score: Public pricing, no dark patterns, public roadmap, honest marketing.
Low score: "Contact sales" pricing, dark patterns, no public roadmap.
Integration Ecosystem
Dimension #7: Number of native integrations, API quality, Zapier/Make coverage.
High score: Many native integrations, strong API, excellent Zapier/Make coverage.
Low score: Few native integrations, poor API documentation, limited automation.
User Sentiment (Verified)
Dimension #8: Only from verified, non-incentivized reviews. Recency-weighted.
High score: Consistently positive verified reviews. High satisfaction across company sizes.
Low score: Negative reviews from verified users. Common complaints across segments.
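Dimension #8's recency weighting can be sketched as an exponential decay over review age. This is an illustrative model only: the page does not specify the decay function, and the 180-day half-life below is a made-up assumption.

```python
from datetime import date

def recency_weighted_sentiment(reviews: list[tuple[date, float]],
                               today: date,
                               half_life_days: float = 180.0) -> float:
    """Weighted average rating where each review's weight halves every
    `half_life_days` days of age (the half-life value is hypothetical)."""
    weights = [0.5 ** ((today - when).days / half_life_days)
               for when, _rating in reviews]
    total = sum(w * rating for w, (_when, rating) in zip(weights, reviews))
    return total / sum(weights)

today = date(2024, 6, 1)
reviews = [(date(2024, 5, 1), 5.0),   # one month old, positive
           (date(2022, 5, 1), 1.0)]   # two years old, negative
# The recent review dominates, so the result lands close to 5.
print(round(recency_weighted_sentiment(reviews, today), 2))
```

The design choice this models: a flood of old reviews (good or bad) fades out instead of anchoring the score forever.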
Data Sources
Public status pages
Uptime and incident data from vendor status pages, monitored continuously.
Verified buyer reviews
Reviews pass multi-step verification: work email, LinkedIn profile, and OAuth usage signals.
Independent feature testing
Our research team tests products directly, documenting features and UX.
Pricing page scrapes
Automated monitoring of vendor pricing pages, verified quarterly by hand.
Support channel testing
We submit real support tickets and measure response times and quality.
API & integration audits
We test APIs, review documentation quality, and count native integrations.
Dispute Process
If a vendor believes their BetterScore is inaccurate, they can submit an evidence-based dispute. We take accuracy seriously and have a formal process to address legitimate concerns.
Step 1: Submit evidence
Vendor submits specific, factual evidence about why a dimension score is incorrect.
Step 2: Score frozen
The current score is frozen (not removed) while the dispute is under review; the frozen status is visible to buyers.
Step 3: Independent panel
A 3-person panel (1 BetterSaaS analyst + 2 external reviewers) evaluates the evidence.
Step 4: Decision published
The panel's decision is final and published publicly, regardless of outcome.
Annual Third-Party Audit
Every year, an independent firm audits the BetterScore formula for bias, conflicts of interest, and methodological soundness. The full audit report is published publicly — no redactions.
Methodology Changelog
Increased "Vendor Transparency" weight from 6% to 8%. Reduced "Integration Ecosystem" from 9% to 7%. Reason: buyer feedback showed pricing transparency is a top-3 concern.
Added OAuth-based usage signals as a verification method for reviews. This strengthens the "Verified" label.
Initial methodology published. 8 dimensions established with weights based on buyer survey (n=2,400).
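The weights mentioned on this page (transparency 8%, integrations 7%, sentiment 5%) slot into a straightforward weighted sum over the 8 dimensions. A minimal sketch: the five remaining weights below are hypothetical placeholders chosen so the total reaches 100%, not the published values.

```python
WEIGHTS = {
    "feature_depth": 0.20,    # hypothetical placeholder
    "value_for_money": 0.20,  # hypothetical placeholder
    "support_quality": 0.15,  # hypothetical placeholder
    "ease_of_use": 0.15,      # hypothetical placeholder
    "reliability": 0.10,      # hypothetical placeholder
    "transparency": 0.08,     # stated in the changelog
    "integrations": 0.07,     # stated in the changelog
    "sentiment": 0.05,        # stated in the FAQ
}

def better_score(dimension_scores: dict[str, float]) -> float:
    """Weighted sum of 0-10 dimension scores, rounded to one decimal."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return round(sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS), 1)

print(better_score({d: 10.0 for d in WEIGHTS}))  # a perfect product scores 10.0
```

Because the weights sum to 1, a product scoring 10 on every dimension gets exactly 10, and shifting weight between dimensions (as in the 8%/7% change above) rebalances the score without changing its scale.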
Frequently Asked Questions
Why is user sentiment only 5%?
Sentiment is the most gameable dimension. Vendors run incentivized review campaigns, offer gift cards for 5-star reviews, and flood platforms with low-quality reviews. We give more weight to things vendors cannot fake: uptime data, pricing transparency, and independently tested feature depth.
How often are scores updated?
Scores are recalculated weekly with the latest data. Major scoring reviews happen quarterly, and the full methodology undergoes an annual third-party audit.
Can a vendor challenge their score?
Yes. Vendors can submit evidence-based disputes through the dispute process. The score is frozen (not removed) while an independent panel reviews the evidence. The panel's decision is final and published publicly.
Why don't you rank [niche tool]?
We're adding new products and categories every week. If you can't find a tool, submit a request through our contact form and we'll prioritize it based on buyer demand.
Does the Vendor Verified badge affect scores?
Absolutely not. Vendor Verified ($99/mo) only gives vendors the ability to respond to reviews, display a verified badge, and access anonymized feedback data. It has zero influence on the BetterScore calculation. This is clearly disclosed on every product page.
How do you prevent conflicts of interest?
Revenue from vendors (Vendor Verified) is capped at 10% of total revenue. Our primary revenue comes from buyer subscriptions (Pro, Team, Enterprise). The scoring algorithm is audited annually by an independent third party, and the report is published publicly.