How We Test Tools

Our methodology for reviewing and scoring every AI tool on Seller Stacked

Every tool on Seller Stacked is tested by a real e-commerce operator -- not a freelance writer working from a press release. Mark Dunne has run e-commerce businesses for 11+ years and uses many of these tools daily. This page explains exactly how we evaluate, score, and recommend tools so you can trust our reviews.

Who Tests the Tools?

All reviews are written by Mark Dunne, founder of Supplements Wise and an active Amazon FBA and Shopify seller. Every tool is evaluated from the perspective of a seller who needs it to work in a real business -- not from a demo walkthrough. Where a tool requires domain expertise (e.g., advertising platforms or SEO suites), we consult with practitioners who use these tools professionally.

What Is Our Testing Process?

1. Sign up and onboard

We create a real account on every tool using either a free trial or a paid plan. We go through the full onboarding flow as a new user and note any friction, setup time, or learning curve.

2. Test core features

We test each tool's core features using real product data -- actual Amazon listings, live Shopify stores, and real ad campaigns. We document what works, what breaks, and what the output quality looks like.

3. Compare against alternatives

No tool exists in a vacuum. We compare outputs side-by-side with competing tools in the same category -- running the same prompts, same data, same use cases -- so you can see how they stack up.

4. Score and write the review

After testing, we assign scores across five dimensions (see below) and write the review. We include specific examples, feature verdicts, and clear recommendations for who should and should not use the tool.

How Do We Score Tools?

Every tool receives a score from 0 to 10 across five dimensions. The overall score is a weighted average based on what matters most to e-commerce sellers.

Dimension | Weight | What We Evaluate
Ease of Use | 20% | Onboarding time, learning curve, UI clarity, documentation quality
Value for Money | 25% | Price relative to output quality, feature limits on lower tiers, ROI potential
Shopify Integration | 20% | Native Shopify app availability, data sync quality, workflow fit for DTC sellers
Amazon Features | 20% | Amazon-specific capabilities, listing optimization, keyword data accuracy
Overall | 15% | General quality, reliability, support responsiveness, update frequency
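As a quick illustration of how the weighted average works, here is a minimal sketch in Python. The weights come from the table above; the individual sub-scores are hypothetical example values, not taken from any real review.

```python
# Weights from the scoring table (they sum to 1.0).
WEIGHTS = {
    "ease_of_use": 0.20,
    "value_for_money": 0.25,
    "shopify_integration": 0.20,
    "amazon_features": 0.20,
    "overall": 0.15,
}

# Hypothetical 0-10 sub-scores for an example tool (illustrative only).
scores = {
    "ease_of_use": 8.0,
    "value_for_money": 7.0,
    "shopify_integration": 9.0,
    "amazon_features": 6.5,
    "overall": 8.0,
}

# The overall score is the sum of each sub-score times its weight.
overall_score = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
print(f"{overall_score:.2f}")  # a single 0-10 figure
```

With these example sub-scores, the tool would land at 7.65 out of 10: strong integrations and usability, pulled down slightly by weaker Amazon features.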

What Do Feature Verdicts Mean?

In expanded reviews, we test specific features and assign one of four verdicts:

Best in category

This feature outperforms all competing tools we tested.

Works well

Solid performance. Does what it promises without major issues.

Acceptable

Gets the job done but has noticeable limitations or rough edges.

Needs work

Below expectations. May improve in future updates but not reliable today.

What Are Our Editorial Standards?

1. Affiliate transparency. We earn commissions from some tools we review. This never changes our scores or recommendations. If a tool scores poorly, we say so -- even if we earn from it.

2. Real testing, not demos. We do not write reviews from screenshots or marketing materials. Every tool is tested with real business data over multiple sessions.

3. Regular updates. Tools change. Pricing changes. Features get added or removed. We re-test and update reviews when tools ship major changes. Each review shows a "last updated" date.

4. External sources. We reference vendor documentation, third-party review platforms (G2, Capterra), and industry benchmarks to provide additional context alongside our hands-on testing.

5. Seller-first perspective. We evaluate tools from the perspective of solo sellers and small teams -- not enterprise marketing departments. If a tool is powerful but too complex or expensive for most sellers, we say so.

Have a question about our process?

We are happy to explain how we evaluated any specific tool.