I.3.6.4: Block Newton Methods
In block Newton methods we update block $i$ by a single Newton step on that block,
$$x_i \leftarrow x_i - \bigl(\nabla_{ii}^2 f(x)\bigr)^{-1}\,\nabla_i f(x).$$
It follows that at a point $x^*$ where the derivatives vanish, $\nabla f(x^*) = 0$, we have $\nabla_i f(x^*) = 0$ for every block, and in particular $x^*$ is a fixed point of the iteration. Linearizing the update around $x^*$ shows that the iteration matrix is the same as the one in the section on block optimization methods, and consequently the asymptotic convergence rate is the same as well. We obtain the same rate again if we make more than one Newton step in each block update.
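The update above can be sketched on a strictly convex quadratic $f(x) = \tfrac12 x^\top A x - b^\top x$, where the block Hessian is just the corresponding diagonal block of $A$ and each block Newton step solves that block exactly. All names here (`block_newton_sweep`, `A`, `b`, `blocks`) are illustrative, not from the text.

```python
import numpy as np

def block_newton_sweep(x, A, b, blocks):
    """One sweep: a full Newton step on each block in turn."""
    for idx in blocks:
        g = A @ x - b                            # gradient of f at current x
        H_ii = A[np.ix_(idx, idx)]               # diagonal Hessian block
        x[idx] -= np.linalg.solve(H_ii, g[idx])  # Newton step in block i
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)                      # SPD Hessian
b = rng.standard_normal(4)
blocks = [np.array([0, 1]), np.array([2, 3])]

x = np.zeros(4)
for _ in range(50):
    x = block_newton_sweep(x, A, b, blocks)

x_star = np.linalg.solve(A, b)
print(np.allclose(x, x_star))                    # iterate reaches the minimizer
```

At the stationary point the gradient vanishes, so every block step is zero and `x_star` is a fixed point of the sweep; on a quadratic the sweep coincides with block Gauss-Seidel on $Ax = b$, which is the connection to the iteration matrix mentioned above.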
Of course, the single-step block Newton method does not guarantee a decrease of the loss function, and consequently needs to be safeguarded in some way, for example by a line search.
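One common safeguard is to treat the block Newton step as a search direction and backtrack until an Armijo sufficient-decrease condition holds, so the loss cannot increase. This is a hedged sketch under that assumption; the function names (`safeguarded_block_step`, `hess_block`) and the example objective are illustrative, not from the text.

```python
import numpy as np

def safeguarded_block_step(f, grad, hess_block, x, idx, c=1e-4, shrink=0.5):
    """Block Newton direction in block `idx`, damped by Armijo backtracking."""
    g = grad(x)[idx]
    d = -np.linalg.solve(hess_block(x, idx), g)   # Newton direction in the block
    t, fx = 1.0, f(x)
    while True:
        x_new = x.copy()
        x_new[idx] = x[idx] + t * d
        if f(x_new) <= fx + c * t * (g @ d):      # sufficient decrease achieved
            return x_new
        t *= shrink                               # otherwise shorten the step

# Illustrative separable objective with minimizer at the origin.
f = lambda x: 0.25 * np.sum(x**4) + 0.5 * (x @ x)
grad = lambda x: x**3 + x
hess_block = lambda x, idx: np.diag(3 * x[idx]**2 + 1)

x = np.array([2.0, -1.5, 1.0, 0.5])
for _ in range(30):
    for idx in [np.array([0, 1]), np.array([2, 3])]:
        x = safeguarded_block_step(f, grad, hess_block, x, idx)
print(np.allclose(x, 0.0, atol=1e-6))
```

Because the block Hessian here is positive definite, the Newton direction is a descent direction and the backtracking loop terminates; with an indefinite block Hessian one would additionally need to modify the direction (e.g. by Hessian damping) before the line search.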