Question

BU.330.760.52.SP25 Final- Requires Respondus LockDown Browser

Single-choice question

Which of the following statements is correct about pre-training and fine-tuning?

Options
A. Pre-training is an unsupervised approach.
B. Fine-tuning can use both supervised and reinforcement learning approaches.
C. GPT is a pre-trained encoder.
D. Both A and B.

Standard answer
D. Both A and B.
Explanation
Question restatement: Which of the following statements is correct about pre-training and fine-tuning? Option A, "Pre-training is an unsupervised approach," is generally accurate for large language models: pre-training typically uses unsupervised or self-supervised objectives (e.g., predicting masked or next tokens), so the model learns broad representations from unlabeled data without human annotation, though some frameworks describe this more precisely as self-supervised learning. Option B, "Fine-tuning can use both supervised and reinforcement learning approaches," is also correct: models are commonly fine-tuned on labeled examples (supervised fine-tuning) and further aligned with reinforcement learning from human feedback (RLHF). Option C is incorrect because GPT is a pre-trained decoder-only model, not an encoder. Since both A and B are correct, the answer is D.
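The contrast the explanation draws, that pre-training derives its own targets from raw text while fine-tuning needs externally supplied labels, can be sketched in a toy Python snippet. This is an illustration only; the function names and data are made up for this example and do not belong to any real library.

```python
# Toy sketch (hypothetical helpers, not a real library API) contrasting
# self-supervised pre-training data with supervised fine-tuning data.

def pretraining_pairs(tokens):
    """Self-supervised next-token prediction: each target is simply
    the following token, so no human labels are required."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

def finetuning_pairs(examples):
    """Supervised fine-tuning: targets come from a labeled dataset
    supplied by annotators, not derived from the text itself."""
    return [(prompt.split(), label) for prompt, label in examples]

raw_text = "the cat sat on the mat".split()
print(pretraining_pairs(raw_text)[0])   # (['the'], 'cat')

labeled = [("great movie", "positive"), ("boring plot", "negative")]
print(finetuning_pairs(labeled)[0])     # (['great', 'movie'], 'positive')
```

The key point the question tests is visible in the inputs: `pretraining_pairs` consumes unlabeled text alone, while `finetuning_pairs` cannot run without labels (and in practice those labels may also come from a reward signal, as in RLHF).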

