Pretrained Transformers for Text Ranking

BERT and Beyond

by Jimmy Lin, Rodrigo Nogueira and Andrew Yates
Paperback
Publication Date: 29/10/2021

$129.00
or 4 easy payments of $32.25 with Afterpay
This item qualifies your order for FREE DELIVERY
The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications.

This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond. The book provides a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems, and for researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures, and dense retrieval techniques that perform ranking directly.

Two themes pervade the book: techniques for handling long documents, beyond the typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, many open research questions remain, and thus, in addition to laying out the foundations of pretrained transformers for text ranking, this book also attempts to prognosticate where the field is heading.
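For readers new to the area, the reranking idea described above can be illustrated in a few lines of Python. The sketch below is not taken from the book; it assumes the sentence-transformers library and a publicly available MS MARCO cross-encoder checkpoint, and the query and candidate texts are invented for illustration. It scores query-passage pairs jointly so that a first-stage candidate list can be reordered by relevance.

```python
# A minimal sketch of cross-encoder reranking (not from the book).
# Assumes the sentence-transformers library and the public
# 'cross-encoder/ms-marco-MiniLM-L-6-v2' checkpoint.
from sentence_transformers import CrossEncoder

# Candidates would normally come from a first-stage retriever (e.g. BM25).
query = "what is text ranking?"
candidates = [
    "Text ranking produces an ordered list of texts in response to a query.",
    "BERT is a pretrained transformer encoder.",
    "The weather in Sydney is mild in October.",
]

# The cross-encoder reads the query and each candidate together and
# emits one relevance score per (query, passage) pair.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = model.predict([(query, passage) for passage in candidates])

# Rerank: sort the candidates by descending score.
for score, passage in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {passage}")
```

Because the cross-encoder attends over the query and passage jointly, it is more effective than scoring them independently, but it must be run once per candidate, which is exactly the effectiveness/efficiency tradeoff the book examines.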
ISBN: 9783031010538
Category: Computational linguistics
Format: Paperback
Publication Date: 29-10-2021
Language: English
Publisher: Springer International Publishing AG
Country of origin: Switzerland
Dimensions (mm): 235 x 191
Weight: 0.62kg

This title is in stock with our Australian supplier and should arrive at our Sydney warehouse within 2-3 weeks of your order being placed.

Once it is received into our warehouse, we will despatch it to you with a Shipping Notification that includes online tracking.

Please check the estimated delivery times below for your region; these apply after your order is despatched from our warehouse:

ACT Metro: 2 working days
NSW Metro: 2 working days
NSW Rural: 2-3 working days
NSW Remote: 2-5 working days
NT Metro: 3-6 working days
NT Remote: 4-10 working days
QLD Metro: 2-4 working days
QLD Rural: 2-5 working days
QLD Remote: 2-7 working days
SA Metro: 2-5 working days
SA Rural: 3-6 working days
SA Remote: 3-7 working days
TAS Metro: 3-6 working days
TAS Rural: 3-6 working days
VIC Metro: 2-3 working days
VIC Rural: 2-4 working days
VIC Remote: 2-5 working days
WA Metro: 3-6 working days
WA Rural: 4-8 working days
WA Remote: 4-12 working days
