Never bored with recommendations:
PhD, RankSys, Mendeley


Saúl Vargas — Data Scientist

Mendeley Ltd.

Outline

  1. About me
  2. PhD: Novelty and Diversity in Recommender Systems
  3. RankSys: Java 8 Recommender Systems framework
  4. Mendeley: recommendations for researchers!
  5. Q&A time

About me

  • Sep 2010 — Dec 2014: PhD at the Autonomous University of Madrid (Spain), supervised by Prof. Pablo Castells. Thesis: Novelty and Diversity Evaluation and Enhancement in Recommender Systems.
  • Jan 2015 — Nov 2015: Post-doc with the Terrier Team at the University of Glasgow. Social media analysis for the SUPER FP7 EU project.
  • Nov 2015 — Present: Data Scientist @ Mendeley / Elsevier. Recommender Systems research and development.

Free time: programming, TV series & music lover, lazy runner.

Recommender Systems

They're everywhere!

Spotify Netflix Amazon
Twitter Facebook Linkedin

Movie Recommendations

Music Recommendations

People Recommendations

PhD: Novelty and Diversity in Recommender Systems

An obvious, redundant recommendation

Recommending The Beatles

The Long Tail

A small number of highly popular items concentrates most of the sales volume.

The rest, the Long Tail, see only modest sales.

Promoting sales in the Long Tail has benefits:

  • For the business: make the most of the catalog, niche markets ➡ Sales diversity
  • For the users: less obvious, unexpected recommendations ➡ Novelty

The Harry Potter Effect: personalized recommendations have a bias towards recommending popular items.

Different perspectives on Novelty and Diversity

  • Long Tail Novelty: avoid popular items.
  • Unexpectedness: surprise the user.
  • Temporal Novelty: the same again?
  • Intra-List Diversity: a little bit of everything.
  • Sales Diversity: make the most of the catalog.
  • Sales Novelty: differentiate yourself from competitors.

Research Goals

  • RG1: develop a clear common methodological and conceptual ground for novelty and diversity in recommendations.
  • RG2: explore the application of theories and methods from Information Retrieval diversity to Recommender Systems.
  • RG3: devise recommendation-specific techniques for enhancing the diversity within recommendations.
  • RG4: propose new techniques for alleviating the popularity bias in recommendations.

Item Novelty Models (I)

Idea: the novelty and diversity of a recommendation can be decomposed into an aggregation of the individual novelties of the items it contains.

Item Novelty Model \[nov: \mathcal{I} \rightarrow [0, \infty)\]

An item novelty model is defined by two elements:

  • Novelty context.
  • Measurement approach.
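As a rough illustration of the idea (a minimal sketch, not the actual RankSys API), an item novelty model can be seen as a function from items to non-negative scores, and the novelty of a list as an aggregation of its items' scores:

import java.util.List;

// A minimal sketch, not the actual RankSys API: an item novelty model maps
// each item to a non-negative score (nov: I -> [0, inf)), and the novelty of
// a recommendation list is an aggregation (here, the mean) of item scores.
interface ItemNoveltyModel<I> {
    double novelty(I item);
}

class ListNovelty {
    static <I> double of(List<I> recommendation, ItemNoveltyModel<I> nov) {
        return recommendation.stream()
                .mapToDouble(nov::novelty)
                .average()
                .orElse(0.0);
    }
}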

Item Novelty Models (II)

Novelty Context: “Novel with respect to what?”

Novelty or diversity perspective ⬌ context $\theta$

  • Long Tail Novelty: what the community watches/listens/buys.
  • Unexpectedness: what the user is familiar with.
  • Temporal Novelty: past recommendations.
  • Intra-List Diversity: the rest of items in a recommendation.
  • Sales Diversity: what is recommended to other users.
  • Sales Novelty: what other systems recommend.

Item Novelty Models (III)

Measurement approaches: “Estimate the degree of novelty numerically.”

\[nov: \mathcal{I} \times \Theta \rightarrow [0, \infty)\]

We propose two families of measurements:

  • Discovery-based: probability of being discovered.
  • Distance-based: dissimilarity from context.
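For instance, a discovery-based model with a long-tail context can score an item by the complement of its observation probability. A minimal sketch, reusing the ItemNoveltyModel interface sketched earlier and assuming a precomputed popularity map (both are illustrative assumptions):

import java.util.Map;

// A minimal sketch, assuming `popularity` maps each item to p(seen | i), the
// fraction of users who have interacted with it. Discovery-based long-tail
// novelty is then the popularity complement; a common alternative is the
// "free discovery" form -log2 p(seen | i).
class PopularityComplementNovelty<I> implements ItemNoveltyModel<I> {
    private final Map<I, Double> popularity;

    PopularityComplementNovelty(Map<I, Double> popularity) {
        this.popularity = popularity;
    }

    @Override
    public double novelty(I item) {
        return 1.0 - popularity.getOrDefault(item, 0.0);
    }
}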

Item Novelty Models (IV)

What are these item novelty models useful for?

  • Generalise existing metrics and define new ones.
  • Understand the relationship between perspectives and metrics.
  • Combine with relevance and ranking discounts.
  • Define re-ranking strategies.

Intent-aware Diversity (I)

IR Diversity
  • Problem: search/recommendation list diversification has been addressed differently in Recommender Systems and in Information Retrieval.
  • Proposal: study the adaptation of the Intent-Aware family of metrics and diversification methods from IR to RS.

Intent-aware Diversity (II)

We define and extract user aspect spaces by means of the features of the items in the recommendation domain.

Example of user aspects
\[p(f | u) = \frac{|\{ i \in \mathcal{I}_u : f \in \mathcal{F}_i \}|}{\sum_{f' \in \mathcal{F}} |\{ i \in \mathcal{I}_u : f' \in \mathcal{F}_i \}|}\]
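A sketch of how this distribution could be computed, assuming each item in the user's profile carries a set of features (names are illustrative):

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A minimal sketch, assuming `profile` holds the user's items I_u and
// `features` gives the feature set F_i of each item. Counts how many profile
// items carry each feature, then normalises to obtain p(f | u).
class UserAspects {
    static <I, F> Map<F, Double> aspectDistribution(List<I> profile,
                                                    Map<I, Set<F>> features) {
        Map<F, Double> counts = new HashMap<>();
        for (I item : profile) {
            for (F f : features.getOrDefault(item, Collections.emptySet())) {
                counts.merge(f, 1.0, Double::sum);
            }
        }
        double total = counts.values().stream().mapToDouble(Double::doubleValue).sum();
        if (total > 0) {
            counts.replaceAll((f, c) -> c / total);
        }
        return counts;
    }
}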

Intent-aware Diversity (III)

Step 1: Extract User Sub-Profiles


Intent-aware Diversity (IV)

Step 2: Generate Recommendations for Sub-Profiles


Intent-aware Diversity (V)

Step 3: Combine Recommendations

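Putting the three steps together, a hedged sketch of the overall pipeline (the profile splitter, the per-sub-profile recommender and the round-robin combination are illustrative stand-ins, not the thesis' exact techniques):

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.BiFunction;
import java.util.function.Function;

// A hedged sketch of the three-step sub-profile diversification pipeline.
class SubProfileDiversification<U, I> {
    List<I> recommend(U user,
                      Function<U, List<List<I>>> splitProfile,           // step 1
                      BiFunction<List<I>, Integer, List<I>> recommender, // step 2
                      int cutoff) {
        List<List<I>> partials = new ArrayList<>();
        for (List<I> subProfile : splitProfile.apply(user)) {
            partials.add(recommender.apply(subProfile, cutoff));
        }
        // Step 3: interleave the partial rankings round-robin, skipping duplicates.
        Set<I> combined = new LinkedHashSet<>();
        for (int pos = 0; combined.size() < cutoff; pos++) {
            boolean anyLeft = false;
            for (List<I> partial : partials) {
                if (pos < partial.size()) {
                    anyLeft = true;
                    combined.add(partial.get(pos));
                    if (combined.size() == cutoff) break;
                }
            }
            if (!anyLeft) break;
        }
        return new ArrayList<>(combined);
    }
}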

Genre Diversity (I)

  • Problem: in the previous contribution we were using genres (as in music, movies or books) as proxies for user aspects.
  • Proposal: study the properties of genres as a source of diversity.
  • We identify three requirements for genre-based diversity: coverage, redundancy and size-awareness.

Genre Diversity (II)

Example: Drama is a broad genre; Western is a narrow one.

Genre Diversity (III)

Genre occurrence as Binomial Distribution

\[ P(X_g = k_g) = \binom{N}{k_g} p_g^{k_g} (1 - p_g)^{N - k_g} \]

$k_g$: times a genre $g$ appears in the recommendation

$N$: recommendation list size

$p_g$: probability of selecting genre $g$

➡ Estimated as a combination of genre generality and user preference.

Genre Diversity (IV)

When a genre $g$ is not covered, how serious is this?

Binomial coverage:

\[ P(X_g = 0) = (1 - p_g)^N \]
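For intuition, a small worked example with assumed numbers: a genre with selection probability $p_g = 0.1$ missing from a list of size $N = 20$ gives

\[ P(X_g = 0) = (1 - 0.1)^{20} \approx 0.12 \]

The smaller this probability, the more anomalous, and hence the more serious, the genre's absence.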

Genre Diversity (V)

Two occurrences of the same genre $g$ are already redundant, but how strong is the effect?

Binomial redundancy:

\[ P(X_g \geq k_g | X_g \neq 0) \]
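A minimal sketch of how these binomial quantities can be computed (illustrative code, not the thesis' implementation; assumes $0 < p_g < 1$):

// A minimal sketch: binomPmf computes P(X = k) for X ~ Binomial(n, p);
// coverage and redundancy follow from it. Assumes 0 < p < 1.
class BinomialGenreDiversity {
    static double binomPmf(int n, int k, double p) {
        double logCoeff = 0.0; // log of the binomial coefficient, for stability
        for (int i = 0; i < k; i++) {
            logCoeff += Math.log(n - i) - Math.log(i + 1);
        }
        return Math.exp(logCoeff + k * Math.log(p) + (n - k) * Math.log(1 - p));
    }

    // Coverage: P(X_g = 0), the probability that genre g does not appear at all.
    static double coverage(int n, double p) {
        return Math.pow(1 - p, n);
    }

    // Redundancy: P(X_g >= k | X_g != 0), how extreme k or more occurrences
    // are, given that the genre appears at least once.
    static double redundancy(int n, int k, double p) {
        double tail = 0.0;
        for (int j = k; j <= n; j++) {
            tail += binomPmf(n, j, p);
        }
        return tail / (1 - coverage(n, p));
    }
}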

PhD: Conclusions

  • Long, arduous but rewarding walk.
  • Solo mission.
  • Relatively high impact of the work: the (last) "diversity guy" in RecSys.
  • Need for code publication and better experimentation.
RankSys

Java 8 Recommender Systems framework for novelty, diversity and much more

http://ranksys.org

Introduction

  • RankSys is a general-purpose Recommender Systems framework, including:
    • Core components for building recommender systems.
    • Implementations of collaborative filtering algorithms.
    • The novelty and diversity components from my PhD thesis.
    • Efficient in-memory structures using compression.
  • Is it production ready? Probably not. Designed for (my) research purposes.
  • Designed specifically for ranking tasks (more on this later).
  • Lightweight and fast.

Alternatives

  • Apache Mahout: Java, a Frankenstein monster of machine learning algorithms, now under a deep redesign.
  • MyMediaLite: C#, very good implementation, currently unmaintained.
  • Lenskit: Java, developed by GroupLens.
  • LibRec: Java, claims to be faster than others.
  • PredictionIO: Python, with commercial support, recently acquired by Salesforce.

Increasing parallelism

  • Evolution of the computing infrastructure at IRG@UAM:
    • Pegasus: 8 cores, 32GB RAM.
    • Galactica: 24 cores, 256GB RAM.
    • Zephyr: 64 cores, 512GB RAM.
  • Taking advantage of the increasing number of cores in a single machine seems useful here.
  • Before Java 8, multithreaded programming was cumbersome and error-prone.

Java 8


// PRECISION AT K: fraction of the top-k recommended items that are relevant
return recommendation.getItems().stream()
        .limit(cutoff)                        // keep only the top k items
        .filter(item -> isRelevant(item.id))  // keep the relevant ones
        .count() / (double) cutoff;

// GENERATING RECOMMENDATIONS AND EVALUATING THEM IN PARALLEL
double avgPrecision = targetUsers.parallelStream()
        .map(u -> alg.getRecommendation(u, 100)) // top-100 list per user
        .mapToDouble(prec::evaluate)             // precision of each list
        .average()                               // average over all target users
        .orElse(0.0);

What about Clojure?

Collaborative Filtering

  • Nearest neighbours: user- and item-based.
  • Matrix factorisation: Hu et al., Pilaszy et al.
  • Topic models: pLSA, LDA.

Very fast and memory-efficient implementations (with type-specific data structures provided by fastutil)

Novelty and Diversity

  • Evaluation: metrics from the state of the art and from my thesis.
  • Enhancement: re-ranking approaches, greedy diversification, inverted neighbourhoods.

Compression for in-memory collaborative filtering data.

  • Use of integer compression techniques like in search indices.
  • E.g. the Netflix Prize dataset fits in 800MB of heap!
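A hedged sketch of the kind of technique involved (illustrative, not RankSys' actual code): sorted item-id lists are stored as gaps, and each gap is packed with a variable number of bytes:

import java.io.ByteArrayOutputStream;

// A sketch of a classic search-index technique, not RankSys' actual code:
// store a sorted list of item ids as gaps (deltas), then encode each gap with
// 7 payload bits per byte; the high bit marks the final byte of a value.
class VByteDeltaEncoder {
    static byte[] encode(int[] sortedIds) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int prev = 0;
        for (int id : sortedIds) {
            int gap = id - prev; // consecutive ids produce small gaps
            prev = id;
            while (gap >= 128) {
                out.write(gap & 0x7F);
                gap >>>= 7;
            }
            out.write(gap | 0x80);
        }
        return out.toByteArray();
    }
}

Small gaps dominate in practice, so most values fit in one or two bytes, which is where the memory savings come from.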

The "Rank" in RankSys

  • Many frameworks include extended support for the rating prediction problem.
  • Influenced by the Netflix prize: $1M for minimizing RMSE.
  • Later research has found that RMSE does not correlate at all with user satisfaction.
  • Ratings are not missing at random: what you see gives more information than what you rate.
  • Learning to Rank: pointwise vs. pairwise or listwise approaches (sketched below).
  • RankSys tunes the algorithms for ranking tasks and will never target rating prediction.
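To make the pointwise/pairwise distinction concrete, a hedged sketch of one well-known pairwise update (a BPR-style SGD step; an illustration of the idea, not RankSys' exact implementation):

// A sketch of a BPR-style pairwise SGD step: for user u, a positive item i
// and a sampled negative item j, push score(u,i) above score(u,j).
class BprStep {
    static void update(double[] pu, double[] qi, double[] qj,
                       double lr, double reg) {
        double xuij = dot(pu, qi) - dot(pu, qj);
        double g = 1.0 / (1.0 + Math.exp(xuij)); // sigmoid(-x_uij)
        for (int k = 0; k < pu.length; k++) {
            double puk = pu[k], qik = qi[k], qjk = qj[k];
            pu[k] += lr * (g * (qik - qjk) - reg * puk);
            qi[k] += lr * (g * puk - reg * qik);
            qj[k] += lr * (-g * puk - reg * qjk);
        }
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int k = 0; k < a.length; k++) s += a[k] * b[k];
        return s;
    }
}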

Ongoing work

  • 3rd party storage: DB, Redis, etc.
  • Factorisation machines.
  • Context-aware recommendations.
  • Content-based recommendations.

Future work

  • RESTful interface.
  • Benchmarking and comparing performance w.r.t. alternatives.
  • Command-line tools for non-Java programmers.
  • Deep learning???

RankSys: Conclusions

  • Great personal experience and learning.
  • Many researchers are not really into programming (especially Java 8, Maven, etc.).
  • Strong competition: alternatives are more popular, but not better.
  • Writing documentation and unit tests is hard, but it is a must.
  • A good way to transition to industry...
Mendeley Ltd.

Recommendations for researchers!

The Data Science team @ Mendeley

Kris Jack
Maya Hristakeva
Benj Pettit
Davide Magatti
Saúl Vargas

What is Mendeley?

Mendeley is a research platform that provides tools to assist researchers during their workflow.

Research workflow

Read & Organise: reference management, cloud storage, metadata extraction & integrated search (Desktop Client).

Search & Discover: Mendeley Suggest.

Collaborate & Network: Mendeley Profile.

Mendeley is now part of Elsevier


Recommendations at Mendeley

Vision: To build a personalised research advisor that helps you to organise your work, contextualise it within the global body of research, and connect you with relevant researchers and artifacts.

We do this by providing:
  • Research article recommendations.
  • Profile recommendations for our social network.

Data Sources

  • Mendeley:
    • User libraries: what they add, what they read, etc.
    • Article metadata: title, abstract, authors, keywords, tags.
    • Groups and user network.
  • Scopus: more metadata, citation network.
  • ScienceDirect: usage logs, article downloads.

Article recommendations: Collaborative Filtering

  • User-based: similar users also read... (see the sketch after this list)
    • Efficient for us because #users << #items.
    • Generated daily in batch jobs.
  • Item-based: similar to article X...
    • Expensive in our setting.
    • Easily reusable in many scenarios.
  • Matrix factorisation:
    • Best model in literature.
    • Too slow and expensive.
  • Topic models:
    • Very good descriptors of research interests.
    • Many uses!
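A hedged sketch of the user-based scoring idea (illustrative names, not Mendeley's production code): candidate articles are scored by the summed similarity of the neighbours whose libraries contain them:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// A sketch of user-based scoring, not Mendeley's production code.
class UserBasedCF {
    static <U, I> Map<I, Double> score(Map<U, Double> neighbours, // v -> sim(u, v)
                                       Map<U, Set<I>> libraries,
                                       Set<I> userLibrary) {
        Map<I, Double> scores = new HashMap<>();
        neighbours.forEach((v, sim) -> {
            for (I article : libraries.getOrDefault(v, Collections.emptySet())) {
                if (!userLibrary.contains(article)) { // skip already-known items
                    scores.merge(article, sim, Double::sum);
                }
            }
        });
        return scores;
    }
}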

Article recommendations: Content-based

  • Using Elasticsearch with titles and abstracts.
  • Uses: last article added/read, research interests.
  • We require filters to avoid out-of-discipline articles.
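As an illustration of this approach (the index name, field names and parameters below are assumptions, not Mendeley's actual setup), Elasticsearch's standard more_like_this query can retrieve articles similar to a given one by its title and abstract:

// A hedged illustration: builds the JSON body of an Elasticsearch
// more_like_this query. Index and field names are assumptions.
class MoreLikeThisQuery {
    static String forArticle(String articleId) {
        return "{\n"
             + "  \"query\": {\n"
             + "    \"more_like_this\": {\n"
             + "      \"fields\": [\"title\", \"abstract\"],\n"
             + "      \"like\": [{ \"_index\": \"articles\", \"_id\": \"" + articleId + "\" }],\n"
             + "      \"min_term_freq\": 1\n"
             + "    }\n"
             + "  }\n"
             + "}";
    }
}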

Profile recommendations

  • Network-based recommendations:
    • Friend of a friend.
    • Common followees.
    • Most followed in discipline.
  • Authorship and readership-based recommendations:
    • Co-authors.
    • People you read.
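A hedged sketch of the friend-of-a-friend idea (illustrative, not Mendeley's production code): rank candidate profiles by how many of the user's followees also follow them:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// A sketch of friend-of-a-friend candidate generation: profiles followed by
// many of u's followees (and not already followed) score higher.
class FriendOfAFriend {
    static <U> Map<U, Integer> candidates(U user, Map<U, Set<U>> followees) {
        Set<U> direct = followees.getOrDefault(user, Collections.emptySet());
        Map<U, Integer> counts = new HashMap<>();
        for (U friend : direct) {
            for (U candidate : followees.getOrDefault(friend, Collections.emptySet())) {
                if (!candidate.equals(user) && !direct.contains(candidate)) {
                    counts.merge(candidate, 1, Integer::sum);
                }
            }
        }
        return counts;
    }
}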

Challenges

  • Interdisciplinary research.
  • Researchers shifting research topics.
  • Leveraging user feedback while managing recommendation fatigue.
  • Increasing retention.

Q&A time!