Question
BU.330.760.52.SP25 Final- Requires Respondus LockDown Browser
Single-answer multiple choice
Which of the following statements is correct about pre-training and fine-tuning?
Options
A. Pre-training is an unsupervised approach.
B. Fine-tuning can use both supervised and reinforcement learning approaches.
C. GPT is a pre-trained encoder.
D. Both A and B.
Answer
D. Both A and B.
Analysis
Question restatement: Which of the following statements is correct about pre-training and fine-tuning?
Option A: 'Pre-training is an unsupervised approach.' This is generally accurate for large language models, where pre-training uses unsupervised or self-supervised objectives (e.g., predicting missing or next tokens). The key idea is learning broad representations from unlabeled data, so describing pre-training as unsupervised captures the typical method, though some texts prefer the term self-supervised because the targets are derived from the data itself rather than from human annotation.
Option B: 'Fine-tuning can use both supervised and reinforcement learning approaches.' This is also correct: models are commonly fine-tuned with supervised learning on labeled examples, and additionally with reinforcement learning methods such as RLHF (reinforcement learning from human feedback).
Option C: 'GPT is a pre-trained encoder.' This is incorrect: GPT (Generative Pre-trained Transformer) is a decoder-only architecture, not an encoder.
Since both A and B are correct, the answer is D.
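The distinction is easy to see in code. Below is a minimal PyTorch sketch (not from the course materials; the toy model, dimensions, and random data are illustrative assumptions) contrasting the two stages: pre-training derives its targets from the raw token stream itself (next-token prediction, hence "self-supervised"), while fine-tuning consumes human-provided labels. RL-based fine-tuning such as RLHF is noted in a comment but not implemented.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, d_model = 100, 32

class TinyLM(nn.Module):
    """Toy language model standing in for a pre-trained transformer."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)   # next-token logits
        self.cls_head = nn.Linear(d_model, 2)           # downstream task labels

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return h                                        # (batch, time, d_model)

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# --- Pre-training: self-supervised next-token prediction.
# The "labels" are just the input shifted by one position, so no human
# annotation is needed -- this is why the stage is called unsupervised
# (or, more precisely, self-supervised).
tokens = torch.randint(0, vocab_size, (8, 16))          # unlabeled corpus batch
h = model(tokens[:, :-1])
logits = model.lm_head(h)
pretrain_loss = loss_fn(logits.reshape(-1, vocab_size),
                        tokens[:, 1:].reshape(-1))
pretrain_loss.backward(); opt.step(); opt.zero_grad()

# --- Fine-tuning: supervised learning on human-labeled pairs.
# (Fine-tuning can also use reinforcement learning, e.g. RLHF, where a
# reward model scores generations instead of a fixed label -- not shown.)
labels = torch.randint(0, 2, (8,))                      # e.g. sentiment labels
h = model(tokens)
cls_logits = model.cls_head(h[:, -1])                   # classify from last state
finetune_loss = loss_fn(cls_logits, labels)
finetune_loss.backward(); opt.step(); opt.zero_grad()
```

Note how the same backbone weights are reused across both stages; only the source of the training signal changes, which is the key difference the question is testing.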
Similar questions
Transfer learning is invariably effective, e.g., irrespective of the amount of data, we can always rely on transfer learning.
When attempting to transfer learn for an image captioning task, we must use a source dataset for image captioning or visual question answering, since image classification by itself is not similar enough.
Which of the following statements about transfer learning in CNNs are correct? (mark all that apply)
What is the key difference between pre-training and fine-tuning stages in transformer model development? Hint: Lec 19, Slide 50.