Abstract:
In information retrieval, users' queries often vary considerably from one another. Most IR systems nevertheless use a single, fixed ranking strategy to serve the information-seeking tasks of all users for all queries, irrespective of the heterogeneity of end users and queries.
The problem addressed in this work is that no single ranking strategy performs best for all queries. This work takes query differences into account when developing the ranking function by first clustering the queries. The cluster-membership information is then incorporated into the learning process of the ranking function: the ranking risks of all training examples are combined with different weights, determined by each training query's similarity to the query clusters, to construct the ranking model.
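As an illustrative sketch only (the symbols \alpha_k(q), R(f_k; q), x_q, c_k, and the cluster count K are assumptions, not notation taken from this paper), such a similarity-weighted combination of ranking risks could take the form of one model per query cluster,

\[
\min_{f_1,\dots,f_K} \; \sum_{q \in Q_{\mathrm{train}}} \sum_{k=1}^{K} \alpha_k(q)\, R(f_k; q),
\qquad
\alpha_k(q) \;=\; \frac{\exp\bigl(-\lVert x_q - c_k \rVert^2\bigr)}{\sum_{j=1}^{K} \exp\bigl(-\lVert x_q - c_j \rVert^2\bigr)},
\]

where x_q is a feature representation of query q, c_k is the centre of the k-th query cluster, \alpha_k(q) is the soft cluster-membership weight of q, and R(f_k; q) is the ranking risk incurred by model f_k on the training examples of q.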
To verify the benefit of the proposed query-dependent ranking system, experiments were conducted on the TREC 2003 and TREC 2004 datasets in the LETOR (Learning To Rank) package. The ranking accuracy of the system is evaluated with the Normalized Discounted Cumulative Gain (NDCG) evaluation metric.
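For reference, NDCG at cut-off position n is computed in the standard way,

\[
\mathrm{NDCG}@n \;=\; Z_n \sum_{j=1}^{n} \frac{2^{r(j)} - 1}{\log_2(1 + j)},
\]

where r(j) is the relevance grade of the document ranked at position j and Z_n is a normalization constant chosen so that a perfect ranking yields NDCG@n = 1. (Minor variants differ in how the top positions are discounted; the form above is the one commonly used with LETOR.)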