
Commit d45d0fb

Merge pull request google-research#6 from cbockman/patch-1
minor README spelling
2 parents: fe35475 + bed1f85

File tree: 1 file changed, +1 −1 lines


README.md

Lines changed: 1 addition & 1 deletion
@@ -671,7 +671,7 @@ accuracy numbers.
 * If you are pre-training from scratch, be prepared that pre-training is
   computationally expensive, especially on GPUs. If you are pre-training from
   scratch, our recommended recipe is to pre-train a `BERT-Base` on a single
-  [preemptable Cloud TPU v2](https://cloud.google.com/tpu/docs/pricing), which
+  [preemptible Cloud TPU v2](https://cloud.google.com/tpu/docs/pricing), which
   takes about 2 weeks at a cost of about $500 USD (based on the pricing in
   October 2018). You will have to scale down the batch size when only training
   on a single Cloud TPU, compared to what was used in the paper. It is
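The diffed README text estimates roughly $500 USD for about 2 weeks of continuous pre-training. As a rough sanity check of that figure, the sketch below assumes a preemptible Cloud TPU v2 rate of about $1.35 per hour (an assumption based on Google Cloud's October 2018 pricing; see the linked pricing page for current rates):

```python
# Rough sanity check of the README's pre-training cost estimate.
# Assumption (not from the commit itself): preemptible Cloud TPU v2
# cost about $1.35/hour as of October 2018.
hours = 14 * 24            # ~2 weeks of continuous pre-training
rate_usd_per_hour = 1.35   # assumed preemptible TPU v2 hourly rate
cost = hours * rate_usd_per_hour
print(f"~${cost:.2f} USD")  # on the order of the README's "about $500"
```

Note that preemptible instances can be interrupted, so wall-clock time (and hence cost) may run somewhat higher in practice.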

0 commit comments
