
How to add tokens to a RoBERTa model in transformers

2022-06-25 02:33:00 Vincy_ King

1. Background

Recently I needed to add special tokens to a RoBERTa model, but every time I ran it on GPU an error was reported (along with a pile of block messages).

Running the same code on CPU also reports an error.
I searched a lot online. The advice was that if you add special tokens or modify vocab.txt, you must call model.resize_token_embeddings(len(tokenizer)), otherwise the embedding dimensions no longer match. But none of the posts made clear where to add the call; I first put it in the dataset-processing code, and the error remained.
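The root cause is easy to see in isolation. Below is a minimal sketch of the mismatch; the checkpoint path chinese-roberta-wwm-ext and the example token are assumptions for illustration:

from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('chinese-roberta-wwm-ext')
model = BertModel.from_pretrained('chinese-roberta-wwm-ext')

# The new token is assigned an id right after the original vocabulary.
tokenizer.add_tokens(['[CH-0]'], special_tokens=True)
new_id = tokenizer.convert_tokens_to_ids('[CH-0]')

# The embedding table still only has vocab_size rows, so any input that
# contains new_id raises an index error on CPU and an assert on GPU.
print(new_id, model.get_input_embeddings().num_embeddings)

# Growing the embedding matrix to match the tokenizer removes the mismatch.
model.resize_token_embeddings(len(tokenizer))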

2. Concrete steps

First, here is what the roberta model folder looks like:
[screenshot: contents of the roberta model folder]
added_tokens.json holds the tokens that need to be added:

{
    "“": 21128, "”": 21129, "</s>": 21130,
    "[CH-0]": 21131, "[CH-1]": 21132, "[CH-2]": 21133, "[CH-3]": 21134, "[CH-4]": 21135,
    "[CH-5]": 21136, "[CH-6]": 21137, "[CH-7]": 21138, "[CH-8]": 21139, "[CH-9]": 21140
}

special_tokens_map.json holds the special tokens:

{
    "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]",
    "cls_token": "[CLS]", "mask_token": "[MASK]"
}

tokenizer_config.json holds the tokenizer configuration:

{
    "do_lower_case": true, "do_basic_tokenize": true, "never_split": null,
    "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]",
    "cls_token": "[CLS]", "mask_token": "[MASK]",
    "tokenize_chinese_chars": true, "strip_accents": null,
    "special_tokens_map_file": "special_tokens_map.json",
    "name_or_path": "chinese-roberta-wwm-ext",
    "use_fast": true, "tokenizer_file": "tokenizer.json",
    "tokenizer_class": "BertTokenizer"
}

Then, in the model code, add self.bert.resize_token_embeddings(len(self.tokenizer)) right after the BERT model and tokenizer are loaded:

import torch.nn as nn
from transformers import BertModel, BertTokenizer


class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.bert = BertModel.from_pretrained(config['bert_path'])
        self.tokenizer = BertTokenizer.from_pretrained(config['bert_path'])

        # Tokens could also be added here in code instead of via added_tokens.json:
        # self.tokenizer.add_tokens(self.new_tokens, special_tokens=True)

        # Resize the embedding matrix so every id the tokenizer can emit has a row.
        # This is the line that fixes the GPU/CPU errors above.
        self.bert.resize_token_embeddings(len(self.tokenizer))

        for param in self.bert.parameters():
            param.requires_grad = True
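To confirm the resize actually took effect, check that the embedding matrix has exactly len(tokenizer) rows after the model is built; the config dict below is a hypothetical example:

config = {'bert_path': './roberta'}   # hypothetical config pointing at the folder above
model = Model(config)

emb = model.bert.get_input_embeddings()
assert emb.num_embeddings == len(model.tokenizer)   # e.g. 21141 after adding 13 tokens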

And that's it ~

Original article

Copyright notice
This article was written by [Vincy_ King]; when reposting, please include a link to the original. Thanks!
https://yzsam.com/2022/176/202206242304107903.html