We will use the HuggingFace implementation in PyTorch.
BERT is a model with absolute position embeddings, so it is usually recommended to pad the inputs on the right rather than on the left.
BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is effective at predicting masked tokens and at NLU in general, but it is not optimal for text generation.
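As a quick illustration of the MLM objective, here is a minimal sketch using the transformers fill-mask pipeline (the example sentence is ours and is not part of the experiment below):
from transformers import pipeline
# sketch: ask BERT to fill in a masked token (MLM objective)
fill_mask = pipeline('fill-mask', model='bert-base-uncased')
print(fill_mask('I love [MASK].')[:3])  # top predictions for the masked position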
The NLP task is sentiment analysis. We run the first experiment in English and then in Spanish. For this task it is fine to use the uncased model (which lowercases the text).
from transformers import BertModel, BertTokenizer
import torch
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
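Since the model is used here only for feature extraction (no fine-tuning), one would usually put it in evaluation mode and run the forward passes under torch.no_grad(); a minimal, optional sketch:
# optional: disable dropout; gradients are not needed for feature extraction
model.eval()
# e.g. later: with torch.no_grad(): out = model(token_ids, attention_mask=attention_mask)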
# sentence
sentence = 'I love París'
# tokenization
tokens = tokenizer.tokenize(sentence)
# print
print(tokens)
['i', 'love', 'paris']
tokens = ['[CLS]'] + tokens + ['[SEP]']
print(tokens)
['[CLS]', 'i', 'love', 'paris', '[SEP]']
Note that 'París' was tokenized as 'paris': the uncased tokenizer lowercases the text and strips accents. The token list has length 5. Suppose we decide that the maximum sentence length will be 7. BERT is built to accept sentences of up to 512 tokens, and all sentences in a batch must have the same length.
## Padding
max_sentence_size = 7
pad_size = max_sentence_size - len(tokens)
for i in range(pad_size):
    tokens = tokens + ['[PAD]']
print(tokens)
['[CLS]', 'i', 'love', 'paris', '[SEP]', '[PAD]', '[PAD]']
## Attention mask
attention_mask = [1 if i != '[PAD]' else 0 for i in tokens]
print(attention_mask)
[1, 1, 1, 1, 1, 0, 0]
token_ids = tokenizer.convert_tokens_to_ids(tokens)
print(token_ids)
[101, 1045, 2293, 3000, 102, 0, 0]
token_ids = torch.tensor(token_ids).unsqueeze(0)  # unsqueeze adds a batch dimension at the front (for a batch of sentences)
attention_mask = torch.tensor(attention_mask).unsqueeze(0)
print(token_ids)
print(attention_mask)
tensor([[ 101, 1045, 2293, 3000, 102, 0, 0]])
tensor([[1, 1, 1, 1, 1, 0, 0]])
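For reference, the tokenizer can perform all of these steps (special tokens, padding, attention mask, tensor conversion) in a single call. A minimal sketch, assuming the same tokenizer as above; the printed tensors should match the ones built manually:
# sketch: let the tokenizer add [CLS]/[SEP], pad to length 7 and build the attention mask
enc = tokenizer('I love París', padding='max_length', max_length=7, truncation=True, return_tensors='pt')
print(enc['input_ids'])       # expected: tensor([[ 101, 1045, 2293, 3000, 102, 0, 0]])
print(enc['attention_mask'])  # expected: tensor([[1, 1, 1, 1, 1, 0, 0]])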
How would you do this with TensorFlow?
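One possible answer (a sketch, assuming TensorFlow and the transformers TF classes are available; not part of the original experiment): use TFBertModel and ask the tokenizer for TensorFlow tensors.
# sketch: same preprocessing, but with TensorFlow tensors and the TF version of BERT
from transformers import TFBertModel
tf_model = TFBertModel.from_pretrained('bert-base-uncased')
enc_tf = tokenizer('I love París', padding='max_length', max_length=7, return_tensors='tf')
tf_out = tf_model(enc_tf['input_ids'], attention_mask=enc_tf['attention_mask'])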
The model returns an output with two objects:
out = model(token_ids, attention_mask = attention_mask)
last_hidden_state, pooler_output = out.last_hidden_state, out.pooler_output # out[0], out[1]
print(last_hidden_state.shape)
torch.Size([1, 7, 768])
# out is a dict-like object. We can get its keys like this:
out.keys()
odict_keys(['last_hidden_state', 'pooler_output'])
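For a sentence-level task such as sentiment analysis, a common choice is to use either pooler_output or the vector of the [CLS] token from last_hidden_state as the sentence representation. A minimal sketch (the variable name cls_embedding is ours):
# sketch: sentence-level feature that could feed a sentiment classifier
cls_embedding = last_hidden_state[:, 0, :]   # [CLS] token vector, shape [1, 768]
print(cls_embedding.shape)  # torch.Size([1, 768])
print(pooler_output.shape)  # torch.Size([1, 768])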
In this section we review how to extract the embeddings produced by each of the encoder layers (12 in the base model, for example). This is sometimes done to extract different features from the sentences.
For example, in the NER (named entity recognition) task, researchers have used the embeddings from different layers, computing weighted averages of some of them, and this has improved accuracy (a sketch of this idea appears below, after the hidden states are extracted).
To do this, the pretrained model must be instantiated with the option output_hidden_states=True:
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
out = model(token_ids, attention_mask=attention_mask)
last_hidden_state, pooler_output, hidden_states = \
out.last_hidden_state, out.pooler_output, out.hidden_states
print(last_hidden_state.shape)
print(pooler_output.shape)
print(len(hidden_states))  # this is a tuple containing the embeddings
# produced by the input embedding layer and all 12 encoder layers
torch.Size([1, 7, 768])
torch.Size([1, 768])
13
Note that hidden_states has 13 elements. Element 0 corresponds to the output of the input embedding layer, and elements 1 through 12 correspond to the output embeddings of each of the 12 encoder layers.
The token representations from the last hidden (encoder) layer can be obtained with hidden_states[12], which is the same tensor as last_hidden_state; this is the final contextual representation of the tokens. The embeddings of each layer i are obtained with hidden_states[i]:
# embeddings from the input embedding layer
input_embedding = hidden_states[0]
print(input_embedding.shape)
# embeddings from encoder layer 11
embedding_11 = hidden_states[11]
print(embedding_11.shape)
torch.Size([1, 7, 768])
torch.Size([1, 7, 768])
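As a sketch of the weighted-average idea mentioned above for NER-style feature extraction (the equal weights over the last four layers are purely illustrative; in practice they could be learned or tuned):
# sketch: combine the last four encoder layers with (hypothetical) equal weights
last_four = torch.stack(hidden_states[-4:])        # shape [4, 1, 7, 768]
weights = torch.tensor([0.25, 0.25, 0.25, 0.25])   # illustrative weights
combined = (weights.view(4, 1, 1, 1) * last_four).sum(dim=0)  # shape [1, 7, 768]
print(combined.shape)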
help(out)
Help on BaseModelOutputWithPoolingAndCrossAttentions in module transformers.modeling_outputs object:

class BaseModelOutputWithPoolingAndCrossAttentions(transformers.file_utils.ModelOutput)
 |  Base class for model's outputs that also contains a pooling of the last hidden states.
 |
 |  Args:
 |      last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)):
 |          Sequence of hidden-states at the output of the last layer of the model.
 |      pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)):
 |          Last layer hidden-state of the first token of the sequence (classification token) further processed by a
 |          Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
 |          prediction (classification) objective during pretraining.
 |      hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed):
 |          Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer)
 |          of shape (batch_size, sequence_length, hidden_size).
 |      attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed):
 |          Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads,
 |          sequence_length, sequence_length). Attention weights after the attention softmax, used to
 |          compute the weighted average in the self-attention heads.
 |      cross_attentions, past_key_values: see the transformers documentation.
 |
 |  (remaining help output, mostly methods inherited from ModelOutput and OrderedDict, omitted)
The attention weights after the attention softmax are used to compute the weighted average in the self-attention heads. They are obtained by passing output_attentions=True to the model:
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True, output_attentions=True)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
out = model(token_ids, attention_mask=attention_mask)
last_hidden_state, pooler_output, hidden_states, attentions = \
out.last_hidden_state, out.pooler_output, out.hidden_states, \
out.attentions
print(len(attentions))
12
print(attentions[11].shape)
torch.Size([1, 12, 7, 7])
The output is interpreted as follows: the dimensions are (batch_size, num_heads, sequence_length, sequence_length), here (1, 12, 7, 7). We therefore have the attention weights of the 12 attention heads for the sentence.
Let's take a look at the attention weights of the last encoder layer.
attention11 = attentions[11].squeeze()  # removes the batch dimension
attention11.shape
torch.Size([12, 7, 7])
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
# version with utf-8 decode (for byte-string tokens, e.g. coming from a TensorFlow tokenizer)
def plot_attention_head_cp(in_tokens, translated_tokens, attention):
    # The plot is of the attention when a token was generated.
    # The model didn't generate `<START>` in the output. Skip it.
    translated_tokens = translated_tokens[1:]
    ax = plt.gca()
    ax.matshow(attention)
    ax.set_xticks(range(len(in_tokens)))
    ax.set_yticks(range(len(translated_tokens)))
    labels = [label.decode('utf-8') for label in in_tokens.numpy()]
    ax.set_xticklabels(labels, rotation=90)
    labels = [label.decode('utf-8') for label in translated_tokens.numpy()]
    ax.set_yticklabels(labels)
def plot_attention_head(in_tokens, translated_tokens, attention):
    # The plot is of the attention when a token was generated.
    # The model didn't generate `<START>` in the output. Skip it.
    # translated_tokens = translated_tokens[1:]
    ax = plt.gca()
    pcm = ax.matshow(attention)
    ax.set_xticks(range(len(in_tokens)))
    ax.set_yticks(range(len(translated_tokens)))
    labels = [label for label in in_tokens]
    ax.set_xticklabels(labels, rotation=90)
    labels = [label for label in translated_tokens]
    ax.set_yticklabels(labels)
head = attention11[0]
head.shape
torch.Size([7, 7])
head = head.detach().numpy()
plot_attention_head(in_tokens=tokens, translated_tokens=tokens, attention=head)
def plot_attention_weights(sentence, translated_tokens, attention_heads):
    in_tokens = sentence
    fig = plt.figure(figsize=(16, 8))
    for h, head in enumerate(attention_heads):
        ax = fig.add_subplot(3, 4, h + 1)
        plot_attention_head(in_tokens, translated_tokens, head)
        ax.set_xlabel(f'Head {h+1}')
    plt.tight_layout()
    plt.show()
heads = attention11.detach().numpy()
plot_attention_weights(sentence=tokens, translated_tokens=tokens,
attention_heads=heads)
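As a small follow-up (a sketch, not part of the original notebook), the 12 heads can also be averaged into a single attention map for a quick overview:
# sketch: average the 12 heads of the last layer into one 7x7 attention map
mean_attention = attention11.detach().mean(dim=0).numpy()
plot_attention_head(in_tokens=tokens, translated_tokens=tokens, attention=mean_attention)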