Non-Fiction Books:

Advances in Multimodal Information Retrieval and Generation

  • Advances in Multimodal Information Retrieval and Generation on Hardback by Man Luo
$149.99
Releases

Pre-order to reserve stock from our first shipment. Your credit card will not be charged until your order is ready to ship.

Available for pre-order now

Buy Now, Pay Later with:

4 payments of $37.50 with Afterpay

Pre-order Price Guarantee

If you pre-order an item and the price drops before the release date, you'll pay the lowest price. This happens automatically when you pre-order and pay by credit card.

If paying by PayPal, Afterpay, Zip or internet banking, and the price drops after you have paid, you can ask for the difference to be refunded.

If Mighty Ape's price changes before release, you'll pay the lowest price.

Availability

This product will be released on October 11th, 2024.

Estimated delivery: 18-25 October using International Courier

Description

This book provides an extensive examination of state-of-the-art methods in multimodal retrieval, generation, and the emerging field of retrieval-augmented generation. The work is grounded in Transformer-based models, exploring the complexities of blending and interpreting the intricate connections between text and images. The authors present cutting-edge theories, methodologies, and frameworks for multimodal retrieval and generation, aiming to give readers a comprehensive understanding of the current state and future prospects of multimodal AI. As such, the book is a crucial resource for anyone interested in the intricacies of multimodal retrieval and generation. Serving as a bridge to mastering and leveraging advanced AI technologies in this field, it is designed for students, researchers, practitioners, and AI enthusiasts alike, offering the tools needed to expand the horizons of what can be achieved in multimodal artificial intelligence.

Author Biography:

Man Luo, Ph.D., is a Research Fellow at Mayo Clinic, Arizona. She received her Ph.D. from Arizona State University (ASU) in 2023. Her research interests lie in Natural Language Processing (NLP) and Computer Vision (CV), with a specific focus on open-domain information retrieval in multimodal settings and retrieval-augmented generation models. She has published first-author papers at top conferences such as AAAI, ACL, and EMNLP. She serves as a guest editor of the PLOS Digital Medicine journal and has reviewed for the AAAI, IROS, EMNLP, NAACL, and ACL conferences. Dr. Luo is an organizer of the ODRUM workshops at CVPR 2022 and CVPR 2023 and of Multimodal4Health at ICHI 2024.

Tejas Gokhale, Ph.D., is an Assistant Professor at the University of Maryland, Baltimore County. He received his Ph.D. from Arizona State University in 2023, his M.S. from Carnegie Mellon University in 2017, and his B.E. (Honours) from the Birla Institute of Technology and Science, Pilani, in 2015. Dr. Gokhale is a computer vision researcher working on robust visual understanding, with a focus on the connection between vision and language, semantic data engineering, and active inference. His research draws inspiration from the principles of perception, communication, learning, and reasoning. He is an organizer of the ODRUM workshops at CVPR 2022 and CVPR 2023, the SERUM tutorial at WACV 2023, and the RGMV tutorial at WACV 2024.

Neeraj Varshney is a Ph.D. candidate at ASU working in natural language processing, primarily focusing on improving the efficiency and reliability of NLP models. He has published multiple papers at top-tier NLP and AI conferences including ACL, EMNLP, EACL, NAACL, and AAAI, and is a recipient of the SCAI Doctoral Fellowship, the GPSA Outstanding Research Award, and a Jumpstart Research Grant. He has served as a reviewer for several conferences including ACL, EMNLP, EACL, and IJCAI, and was selected as an outstanding reviewer by the EACL 2023 conference.

Yezhou Yang, Ph.D., is an Associate Professor with the School of Computing and Augmented Intelligence (SCAI), Arizona State University. He received his Ph.D. from the University of Maryland. His primary interests lie in Cognitive Robotics, Computer Vision, and Robot Vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and performing high-level reasoning over those primitives for intelligent robots.

Chitta Baral, Ph.D., is a Professor with the School of Computing and Augmented Intelligence (SCAI), Arizona State University, and received his Ph.D. from the University of Maryland. His primary interests lie in Natural Language Processing (NLP), Computer Vision (CV), the intersection of NLP and CV, and Knowledge Representation and Reasoning.
Release date Australia
October 11th, 2024
Audience
  • Professional & Vocational
Illustrations
30 color illustrations; approx. 150 pages
Pages
150
ISBN-13
9783031578151
Product ID
38747565


