Elements Of Deep Learning Theory

Format: Hardback

$200.99 (was $257.99)
Releases January 31st, 2025

Pre-order to reserve stock from our first shipment. Your credit card will not be charged until your order is ready to ship.

Available for pre-order now

Buy Now, Pay Later with:

4 payments of $50.25 with Afterpay

Pre-order Price Guarantee

If you pre-order an item and the price drops before the release date, you'll pay the lowest price. This happens automatically when you pre-order and pay by credit card.

If paying by PayPal, Afterpay, Zip or internet banking, and the price drops after you have paid, you can ask for the difference to be refunded.

If Mighty Ape's price changes before release, you'll pay the lowest price.

Availability

This product will be released on January 31st, 2025.

It should arrive:

  • 7-14 February using International Courier

Description

While the field of Deep Learning has been advancing at a remarkable pace over the last ten years, we still lack a plausible theoretical explanation of its success. One can safely say that Deep Learning works, but nobody really understands why. Nevertheless, starting around five years ago, a decent number of theoretical papers specifically concerning neural networks began to appear at top machine learning venues. This suggests that the core of Deep Learning theory has already started to crystallize.

The goal of the present book is to present these core concepts of Deep Learning theory so that readers can dive directly into recent papers in this area. For this purpose, each chapter first elaborates a simple model or a classical result in detail, and then discusses possible generalizations and more recent developments of the same idea.

We have to warn the reader that the present book is not a mathematical manuscript. Not all results are stated as formal theorems, and not all theorems come with complete and rigorous proofs. Many of the theorems are proven up to some technical lemmas, while for some only a proof sketch is given. This conforms with the main idea of the book: to present and illustrate concepts rather than reproduce all the results.

The book, in its present form, covers the following topics: uniform generalization bounds, PAC-Bayesian generalization bounds, the double descent phenomenon, infinitely wide networks, implicit bias of gradient descent, loss landscape, gradient descent convergence guarantees, and initialization strategies.
Release date (Australia): January 31st, 2025
Audiences:
  • Professional & Vocational
  • Tertiary Education (US: College)
Pages: 200
ISBN-13: 9789811271267
Product ID: 36421017
