Algorithm: Second-order MAML with the zeroing trick
  Require: Task distribution p(T)
  Require: α, η: inner-loop and meta learning rates
  Require: Randomly initialized base-model parameters θ
1:  Set w ← 0 (the zeroing trick)
2:  while not done do
3:    Sample tasks {T_1, …, T_{N_batch}} from p(T)
4:    for i = 1, 2, …, N_batch do
5:      {D_i^tr, D_i^test} ← sample from T_i
6:      θ_i ← θ
7:      for j = 1, 2, …, N_step do
8:        θ_i ← θ_i − α ∇_{θ_i} L(θ_i, D_i^tr)
9:      end for
10:    end for
11:    Update θ ← θ − η ∇_θ Σ_{i=1}^{N_batch} L(θ_i, D_i^test)
12:    Set w ← 0 (the zeroing trick)
13: end while
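The loop above can be sketched in NumPy for a toy setting: a linear base model f(x) = xθ on synthetic linear-regression tasks, where the gradient and Hessian of the squared loss have closed forms, so the second-order meta-gradient (the term that differentiates *through* the inner-loop updates) can be computed explicitly as a product of (I − αH) Jacobian factors. This is a minimal sketch, not the paper's implementation: the task distribution, model, and the choice of treating w as the last component of θ (the parameter reset by the zeroing trick) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_batch, n_step = 3, 4, 2   # toy sizes (assumptions)
alpha, eta = 0.1, 0.05           # inner-loop and meta learning rates

def loss_grad_hess(theta, X, y):
    """Squared-error loss, gradient, and Hessian for the linear model f(x) = X @ theta."""
    r = X @ theta - y
    n = len(y)
    return 0.5 * r @ r / n, X.T @ r / n, X.T @ X / n

def sample_task():
    """Toy task T_i: noisy linear data split into D_i^tr and D_i^test."""
    true_theta = rng.normal(size=dim)
    X = rng.normal(size=(10, dim))
    y = X @ true_theta + 0.01 * rng.normal(size=10)
    return (X[:5], y[:5]), (X[5:], y[5:])

theta = rng.normal(size=dim)                 # randomly initialized base model
for _ in range(200):                         # "while not done"
    theta = theta.copy()
    theta[-1] = 0.0                          # zeroing trick: w <- 0 (w assumed to be theta's last entry)
    meta_grad = np.zeros(dim)
    for _ in range(n_batch):
        (Xtr, ytr), (Xte, yte) = sample_task()
        theta_i = theta                      # theta_i <- theta
        J = np.eye(dim)                      # Jacobian d(theta_i)/d(theta), tracked for the 2nd-order term
        for _ in range(n_step):              # inner loop
            _, g_tr, H_tr = loss_grad_hess(theta_i, Xtr, ytr)
            theta_i = theta_i - alpha * g_tr
            J = (np.eye(dim) - alpha * H_tr) @ J   # chain rule through the inner update
        _, g_te, _ = loss_grad_hess(theta_i, Xte, yte)
        meta_grad += J.T @ g_te              # second-order meta-gradient for this task
    theta = theta - eta * meta_grad          # meta-update
```

The `J.T @ g_te` term is what makes this *second-order* MAML: dropping `J` (setting it to the identity) recovers the first-order approximation.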