Product recommendations are one of the most powerful tools available to an e-commerce retailer. When done well, they surface the right product to the right customer at the right moment — accelerating discovery, increasing basket size, and driving the kind of repeat purchases that build customer lifetime value. When done poorly, they are noise: irrelevant carousels that customers learn to ignore and that contribute nothing to the bottom line.
The difference between good recommendations and bad ones is not a question of effort — most retailers are trying. The difference is in the underlying approach: the algorithms used, the placements chosen, the business rules applied, and the discipline of measurement that drives continuous improvement. This guide covers all four.
Understanding Recommendation Algorithms
The first thing to understand is that there is no single best recommendation algorithm. Different approaches have different strengths, and the most effective recommendation systems combine multiple approaches in ways that play to each one's strengths while mitigating its weaknesses.
Collaborative Filtering
Collaborative filtering is the oldest and most widely understood recommendation approach. The core idea is simple: if customer A and customer B have similar purchase histories, the products that B has bought but A has not are likely candidates for recommendation to A. In the item-based variant (more computationally scalable), you find products that are frequently purchased together and use those co-occurrence patterns to recommend related items.
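As a concrete illustration, the item-based variant can be reduced to counting co-occurrences across historical orders. A minimal sketch in Python, with invented product names rather than any real catalog:

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(orders):
    """Count how often each pair of products appears in the same order."""
    counts = defaultdict(int)
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            counts[(a, b)] += 1
    return counts

def recommend_for(product, counts, top_n=3):
    """Rank other products by how often they co-occur with `product`."""
    scores = defaultdict(int)
    for (a, b), n in counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

orders = [
    ["running-shoes", "running-socks", "water-bottle"],
    ["running-shoes", "running-socks"],
    ["running-shoes", "headband"],
]
print(recommend_for("running-shoes", build_cooccurrence(orders)))
```

Production systems normalize these raw counts (for example with cosine similarity or lift) so that globally popular items do not dominate, but the core mechanism is exactly this co-occurrence counting.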
Collaborative filtering's strength is that it captures genuine user behavior patterns without requiring any explicit knowledge about the products themselves. It discovers relationships that no one programmed — like the fact that customers who buy a particular brand of running shoes tend to also buy a specific type of running sock. These emergent patterns are often the most commercially valuable.
Its weaknesses are equally well documented. It suffers from the cold-start problem: new products have no purchase history, so they cannot be recommended through collaborative filtering alone. It tends to over-recommend popular items, creating a popularity feedback loop that disadvantages new or niche products. And it is fundamentally backward-looking — it tells you what customers have done, not what they are trying to do right now.
Content-Based Filtering
Content-based filtering takes a different approach: instead of looking at what similar customers have done, it looks at the attributes of products themselves. If a customer has been browsing blue floral dresses, a content-based system will recommend other blue floral dresses — products that share attribute patterns with what the customer has shown interest in.
This approach handles the cold-start problem elegantly — a new product with no purchase history can still be recommended immediately as long as its attributes are well-specified. It is also more explainable: you can tell a customer exactly why a product was recommended (because you showed interest in similar colors and styles).
The limitation is that content-based filtering can only recommend products similar to what the customer has already seen. It tends to create a filter bubble, surfacing more of the same rather than facilitating discovery of genuinely new product categories the customer might love.
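To make the mechanics concrete, here is a minimal attribute-similarity sketch using Jaccard overlap on hand-labeled attribute sets. The catalog entries and attributes are illustrative, not from a real store:

```python
def jaccard(a, b):
    """Similarity of two attribute sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

catalog = {
    "dress-1": {"blue", "floral", "midi", "cotton"},
    "dress-2": {"blue", "floral", "maxi", "silk"},
    "jacket-1": {"black", "leather", "biker"},
}

def similar_products(product_id, top_n=2):
    """Rank other catalog items by attribute overlap with the target."""
    target = catalog[product_id]
    others = [(pid, jaccard(target, attrs))
              for pid, attrs in catalog.items() if pid != product_id]
    return sorted(others, key=lambda x: x[1], reverse=True)[:top_n]

print(similar_products("dress-1"))
```

Real systems typically replace discrete attribute sets with learned embeddings, but the filter-bubble limitation is visible even here: nothing connects "dress-1" to the jacket, so the jacket can never be recommended from it.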
Hybrid and Session-Based Models
The state of the art in 2025 is hybrid models that combine collaborative and content-based signals, augmented with real-time session context. A session-based model tracks the sequence of actions a customer takes during their current visit — which products they viewed, in what order, what they searched for, how long they spent on each page — and uses that sequence to infer purchase intent.
The transformer architecture that powers large language models has proven particularly effective for session-based recommendation. By treating a shopping session as a sequence of "tokens" (product interactions), a transformer model can capture the kind of complex, context-dependent intent patterns that earlier approaches missed. A customer who searches for "running shoes" then views three products with trail running specifications and then navigates to the men's apparel section is exhibiting a specific intent pattern — one that a good session model can recognize and use to surface highly relevant recommendations.
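A production transformer is far beyond a snippet, but the underlying idea of learning from interaction sequences can be shown with a deliberately simplified stand-in: a first-order transition model that counts which item tends to follow which. The session data below is invented:

```python
from collections import defaultdict

def train_transitions(sessions):
    """Count item -> next-item transitions across historical sessions."""
    trans = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            trans[cur][nxt] += 1
    return trans

def next_item_candidates(current_item, trans, top_n=2):
    """Rank likely next items given the customer's current position."""
    followers = trans.get(current_item, {})
    return sorted(followers, key=followers.get, reverse=True)[:top_n]

sessions = [
    ["trail-shoe-a", "trail-shoe-b", "trail-sock"],
    ["trail-shoe-a", "trail-sock", "hydration-pack"],
    ["trail-shoe-a", "trail-shoe-b", "hydration-pack"],
]
trans = train_transitions(sessions)
print(next_item_candidates("trail-shoe-a", trans))
```

A transformer generalizes this by conditioning on the entire session (searches, dwell time, category navigation) rather than only the last item, which is what lets it capture the multi-step intent patterns described above.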
Madewithinter's recommendation engine uses a hybrid architecture: collaborative signals from historical purchase data, content signals from product attribute embeddings, and real-time session signals processed through a transformer model. Each signal type contributes to the final recommendation score, weighted dynamically based on how much data is available for each customer.
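The dynamic weighting described here might look like the following sketch. The signal scores, candidate products, and weights are all hypothetical, not Madewithinter's actual implementation:

```python
def hybrid_score(product, signals, weights):
    """Blend per-signal scores with per-customer weights (all in [0, 1])."""
    return sum(weights[name] * signals[name].get(product, 0.0)
               for name in weights)

# Hypothetical per-signal scores for one customer and two candidates.
signals = {
    "collaborative": {"sock": 0.9, "bottle": 0.4},
    "content":       {"sock": 0.5, "bottle": 0.7},
    "session":       {"sock": 0.8, "bottle": 0.2},
}

# A returning customer with rich purchase history leans on collaborative
# data; an anonymous visitor would shift weight toward session signals.
weights = {"collaborative": 0.5, "content": 0.2, "session": 0.3}

for p in ("sock", "bottle"):
    print(p, round(hybrid_score(p, signals, weights), 2))
```

The key design choice is that the weights are a function of data availability: a cold-start customer gets near-zero collaborative weight, which is exactly how a hybrid mitigates each component's weakness.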
Placement Strategy: Where Recommendations Work Best
Even the best algorithm will underperform if recommendations are placed in low-visibility locations or at moments when the customer is not receptive. Placement strategy is as important as algorithm quality.
Homepage Recommendations
For returning customers, the homepage is your most valuable personalization canvas. A personalized homepage — showing products aligned with a customer's browsing history, purchase patterns, and inferred preferences — can dramatically outperform a generic promotional banner in both engagement and conversion.
The key principle for homepage recommendations is to demonstrate that you know the customer. Lead with a headline like "Welcome back — picked for you" rather than a generic "Trending products." The specificity of the recommendation matters less than the signal that the experience was created for this individual.
For new visitors, homepage recommendations should blend editorial curation with data-driven popularity signals. Trending products, staff picks, and new arrivals give first-time visitors a sense of your brand voice while still surfacing commercially valuable inventory.
Product Detail Page (PDP) Recommendations
The product detail page is where recommendation strategy becomes most nuanced. There are three distinct recommendation types that can appear on a PDP, each serving a different commercial purpose.
"Frequently bought together" recommendations leverage collaborative filtering to surface products that complement the item being viewed. For a camera, this might include a compatible lens, a carrying case, and a memory card. These recommendations increase basket size by surfacing genuinely useful add-ons at the moment of highest purchase intent.
"Similar products" recommendations use content-based filtering to surface alternatives — products that share attributes with the one being viewed. These are valuable when a customer is still in the consideration phase and might not buy the specific product they are viewing. If they leave the current product without adding to cart, a visible similar alternatives section might retain the purchase within your catalog.
"Customers also viewed" recommendations use collaborative signals to surface products that tend to accompany the current product in browsing sessions. These are more exploratory and help customers discover adjacent products they might not have found through navigation.
Cart Recommendations
The cart is a uniquely high-intent moment — a customer has already committed to one or more purchases and is within a few clicks of completing a transaction. Recommendations here should be surgical: focused on items that genuinely complement what is in the cart, priced appropriately (low-cost add-ons convert better than high-cost alternatives), and presented with a clear "add to cart" action that does not interrupt the checkout flow.
The worst thing you can do in the cart is distract a customer from completing their purchase. Recommendations in this context should be clearly subordinate to the primary action — checking out — and easy to dismiss if not relevant.
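Those cart constraints — genuine complements, capped price, no duplicates — can be sketched as a simple filter. The catalog, prices, complement map, and price-cap ratio below are all illustrative:

```python
def cart_addons(cart, catalog, complements, max_price_ratio=0.25, top_n=2):
    """Suggest complements whose price is capped at a fraction of cart total."""
    cart_total = sum(catalog[item]["price"] for item in cart)
    cap = cart_total * max_price_ratio
    candidates = []
    for item in cart:
        for c in complements.get(item, []):
            if c not in cart and catalog[c]["price"] <= cap:
                candidates.append(c)
    # Cheapest first: low-cost add-ons convert better than pricey alternatives.
    return sorted(set(candidates), key=lambda c: catalog[c]["price"])[:top_n]

catalog = {
    "camera":      {"price": 800.0},
    "memory-card": {"price": 35.0},
    "tripod":      {"price": 120.0},
    "lens":        {"price": 450.0},
}
complements = {"camera": ["memory-card", "tripod", "lens"]}
print(cart_addons(["camera"], catalog, complements))
```

Note that the expensive lens is filtered out even though it is a genuine complement: at the cart stage, the price cap protects the primary checkout action.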
Email Recommendations
Email is the channel where personalization has perhaps the longest history and the most mature practice. Product recommendations in emails — whether in triggered abandoned cart messages, post-purchase follow-ups, or weekly digest campaigns — consistently outperform generic promotional content in both open rates and conversion.
The key consideration for email recommendations is timing. An abandoned cart email sent within one hour of cart abandonment converts at rates 5-10x higher than one sent 24 hours later. A post-purchase recommendation email sent within 48 hours of delivery — when the product is fresh and the customer experience is top of mind — performs significantly better than one sent at a fixed 30-day interval.
A/B Testing Your Recommendations
No recommendation configuration is optimal from day one. The only way to know whether a particular algorithm, placement, or design choice is working is to test it rigorously against an alternative. A/B testing is not optional for serious recommendation optimization — it is the engine of improvement.
When designing recommendation A/B tests, the most important discipline is testing one variable at a time. If you simultaneously change the algorithm, the placement, and the visual design, you will not know which change drove any observed difference in outcomes. Pick one variable, test it with sufficient traffic to reach statistical significance, and then move to the next.
Common recommendation variables to test include: algorithm type (collaborative vs. hybrid vs. session-based), number of items displayed (4 vs. 6 vs. 8), widget title and framing language, placement position on the page, visual presentation (grid vs. carousel), and whether to include pricing or rating information in the recommendation card.
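To decide when a test has actually reached significance, a standard two-proportion z-test on CTR is one common approach. The traffic numbers below are hypothetical:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided z-test for a difference in CTR between variants A and B."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical widget test: variant B shows 6 items instead of 4.
z, p = two_proportion_z(clicks_a=300, imps_a=10_000,
                        clicks_b=370, imps_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If p is below your chosen threshold (commonly 0.05), the CTR difference is unlikely to be noise. Remember that CTR significance alone does not prove a revenue lift; rerun the same analysis on your downstream metrics before declaring a winner.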
Merchandising Rules: Blending AI with Business Logic
Pure algorithmic recommendations will sometimes surface products that are algorithmically optimal but commercially inappropriate. A clearance item with a rock-bottom price might score highly on conversion probability — but recommending it on the homepage of a premium brand undermines your positioning. A product that is technically in-stock but allocated to a major wholesale partner might convert online but create fulfillment problems.
Merchandising rules allow you to layer business logic on top of algorithmic recommendations. Common rules include: excluding out-of-stock products, prioritizing products with high margin contributions, boosting recently launched products to accelerate their discovery, suppressing products from certain categories on certain pages, and ensuring a minimum representation of sale items to support promotional goals.
The art is in calibrating the weight of merchandising rules relative to algorithmic scores. Over-ruled systems lose the personalization benefit; under-ruled systems create commercial problems. Most sophisticated retailers run merchandising rules as hard filters (absolute exclusions) combined with soft boosts and demotions that influence ranking without overriding the algorithmic score entirely.
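The hard-filter-plus-soft-boost pattern could look like the following sketch. The catalog flags, boost magnitudes, and scores are illustrative choices, not a prescribed calibration:

```python
def apply_merch_rules(scored, catalog, boost=0.1, demote=0.1):
    """Hard filters remove items; soft boosts/demotions nudge the ranking."""
    ranked = []
    for pid, score in scored.items():
        item = catalog[pid]
        if not item["in_stock"]:          # hard filter: absolute exclusion
            continue
        if item.get("new_arrival"):       # soft boost: accelerate discovery
            score += boost
        if item.get("clearance"):         # soft demotion: protect positioning
            score -= demote
        ranked.append((pid, score))
    return sorted(ranked, key=lambda x: x[1], reverse=True)

catalog = {
    "a": {"in_stock": True, "new_arrival": True},
    "b": {"in_stock": True, "clearance": True},
    "c": {"in_stock": False},
}
scored = {"a": 0.60, "b": 0.65, "c": 0.90}
print(apply_merch_rules(scored, catalog))
```

Here the out-of-stock item is excluded outright even though it had the highest algorithmic score, while the boost and demotion reorder the remaining two without overriding the underlying scores entirely.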
Metrics That Matter
Measuring recommendation performance requires a clear framework of metrics at different levels of the funnel.
Click-through rate (CTR) measures the percentage of recommendation impressions that result in a product click. It is a measure of relevance and presentation quality — a recommendation that catches the eye and matches the customer's immediate interest will generate a click. Typical CTR benchmarks for well-configured recommendation widgets range from 3% to 12% depending on placement and category.
Add-to-cart rate measures the percentage of recommendation clicks that result in an add-to-cart event. This metric captures the quality of the product match — a customer might click on a recommendation because it looks appealing, but only add it to cart if it genuinely meets their needs. The gap between CTR and add-to-cart rate is a diagnostic signal about recommendation quality.
Revenue per recommendation is the ultimate measure: how much incremental revenue is generated per recommendation impression? This metric accounts for both conversion rate and average order value associated with recommended products. It is the number that justifies your investment in recommendation technology and the number you should be optimizing above all others.
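All three funnel metrics are straightforward to compute from raw event counts; the numbers below are invented for illustration:

```python
def funnel_metrics(impressions, clicks, adds, attributed_revenue):
    """Compute the three recommendation funnel metrics from raw counts."""
    return {
        "ctr": clicks / impressions,
        "add_to_cart_rate": adds / clicks,
        "revenue_per_impression": attributed_revenue / impressions,
    }

m = funnel_metrics(impressions=50_000, clicks=2_500, adds=500,
                   attributed_revenue=12_500.0)
print({k: round(v, 3) for k, v in m.items()})
```

The hard part in practice is not the arithmetic but the attribution: deciding which revenue counts as "from a recommendation" (last click, any click in session, incremental vs. holdout) changes the revenue-per-impression figure substantially, so fix that definition before comparing placements.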
Track these metrics by placement (homepage vs. PDP vs. cart vs. email), by customer segment (new vs. returning, high-CLV vs. low-CLV), and over time (week-over-week, cohort-based). The patterns in the data will tell you where your biggest optimization opportunities lie.
Getting Started: An Implementation Checklist
If you are building or upgrading your recommendation system, the following checklist will help you approach the implementation systematically.

1. Audit your current data collection: are you capturing all behavioral events across the funnel?
2. Define your recommendation strategy by placement before writing a line of code.
3. Choose an algorithm approach appropriate to your catalog size and data maturity.
4. Implement one placement at a time, measuring impact before expanding.
5. Establish your A/B testing cadence — at least one active recommendation test at any given time.
6. Define your merchandising rules in collaboration with your merchandising and brand teams.
7. Set up your measurement framework before you launch, so you are capturing baseline data from day one.
Done well, product recommendations are one of the most powerful revenue drivers available to an e-commerce team. The investment in getting them right — in algorithm quality, placement strategy, testing discipline, and measurement rigor — pays compounding returns. Start simple, measure everything, and iterate relentlessly.