- Introduction. Klaus-Robert Müller. pp. 1-5
- Speeding Learning. Klaus-Robert Müller. pp. 7-8
- Efficient BackProp. Yann LeCun, Léon Bottou, Genevieve B. Orr, Klaus-Robert Müller. pp. 9-48
- Regularization Techniques to Improve Generalization. Klaus-Robert Müller. pp. 49-51
- Early Stopping - But When? Lutz Prechelt. pp. 53-67
- A Simple Trick for Estimating the Weight Decay Parameter. Thorsteinn S. Rögnvaldsson. pp. 69-89
- Controlling the Hyperparameter Search in MacKay's Bayesian Neural Network Framework. Tony Plate. pp. 91-110
- Adaptive Regularization in Neural Network Modeling. Jan Larsen, Claus Svarer, Lars Nonboe Andersen, Lars Kai Hansen. pp. 111-130
- Large Ensemble Averaging. David Horn, Ury Naftaly, Nathan Intrator. pp. 131-137
- Improving Network Models and Algorithmic Tricks. Klaus-Robert Müller. pp. 139-141
- Square Unit Augmented, Radially Extended, Multilayer Perceptrons. Gary William Flake. pp. 143-161
- A Dozen Tricks with Multitask Learning. Rich Caruana. pp. 163-189
- Solving the Ill-Conditioning in Neural Network Learning. P. Patrick van der Smagt, Gerd Hirzinger. pp. 191-203
- Centering Neural Network Gradient Factors. Nicol N. Schraudolph. pp. 205-223
- Avoiding Roundoff Error in Backpropagating Derivatives. Tony Plate. pp. 225-230
- Representing and Incorporating Prior Knowledge in Neural Network Training. Klaus-Robert Müller. pp. 231-233
- Transformation Invariance in Pattern Recognition - Tangent Distance and Tangent Propagation. Patrice Y. Simard, Yann LeCun, John S. Denker, Bernard Victorri. pp. 235-269
- Combining Neural Networks and Context-Driven Search for On-line, Printed Handwriting Recognition in the Newton. Larry S. Yaeger, Brandyn J. Webb, Richard F. Lyon. pp. 271-293
- Neural Network Classification and Prior Class Probabilities. Steve Lawrence, Ian Burns, Andrew D. Back, Ah Chung Tsoi, C. Lee Giles. pp. 295-309
- Applying Divide and Conquer to Large Scale Pattern Recognition Tasks. Jürgen Fritsch, Michael Finke. pp. 311-338
- Tricks for Time Series. Klaus-Robert Müller. pp. 339-341
- Forecasting the Economy with Neural Nets: A Survey of Challenges and Solutions. John Moody. pp. 343-367
- How to Train Neural Networks. Ralph Neuneier, Hans-Georg Zimmermann. pp. 369-418
- Big Learning and Deep Neural Networks. Grégoire Montavon, Klaus-Robert Müller. pp. 419-420
- Stochastic Gradient Descent Tricks. Léon Bottou. pp. 421-436
- Practical Recommendations for Gradient-Based Training of Deep Architectures. Yoshua Bengio. pp. 437-478
- Training Deep and Recurrent Networks with Hessian-Free Optimization. James Martens, Ilya Sutskever. pp. 479-535
- Implementing Neural Networks Efficiently. Ronan Collobert, Koray Kavukcuoglu, Clément Farabet. pp. 537-557
- Better Representations: Invariant, Disentangled and Reusable. Grégoire Montavon, Klaus-Robert Müller. pp. 559-560
- Learning Feature Representations with K-Means. Adam Coates, Andrew Y. Ng. pp. 561-580
- Deep Big Multilayer Perceptrons for Digit Recognition. Dan Claudiu Ciresan, Ueli Meier, Luca Maria Gambardella, Jürgen Schmidhuber. pp. 581-598
- A Practical Guide to Training Restricted Boltzmann Machines. Geoffrey E. Hinton. pp. 599-619
- Deep Boltzmann Machines and the Centering Trick. Grégoire Montavon, Klaus-Robert Müller. pp. 621-637
- Deep Learning via Semi-supervised Embedding. Jason Weston, Frédéric Ratle, Hossein Mobahi, Ronan Collobert. pp. 639-655
- Identifying Dynamical Systems for Forecasting and Control. Grégoire Montavon, Klaus-Robert Müller. pp. 657-658
- A Practical Guide to Applying Echo State Networks. Mantas Lukoševičius. pp. 659-686
- Forecasting with Recurrent Neural Networks: 12 Tricks. Hans-Georg Zimmermann, Christoph Tietz, Ralph Grothmann. pp. 687-707
- Solving Partially Observable Reinforcement Learning Problems with Recurrent Neural Networks. Siegmund Duell, Steffen Udluft, Volkmar Sterzing. pp. 709-733
- 10 Steps and Some Tricks to Set up Neural Reinforcement Controllers. Martin Riedmiller. pp. 735-757