```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-ro")

# Define the input text to translate
input_text = "Hello, world!"

# Tokenize the input text
input_ids = tokenizer.encode(input_text, return_tensors="pt")

# Generate the translation (no gradient tracking is needed for inference)
with torch.no_grad():
    output = model.generate(input_ids, max_length=128)

# Decode the output
decoded_text = tokenizer.decode(output[0], skip_special_tokens=True)

# Print the translated text
print(decoded_text)
```

I have made the following changes:

* Imported `torch` so that generation can run under `torch.no_grad()`, which avoids tracking gradients during inference.
* Removed the unnecessary `device` argument from the `tokenizer.encode()` and `model.generate()` calls; neither needs it here, since everything runs on the CPU by default (see the GPU sketch below if you want to move the model to a GPU).
* Added `skip_special_tokens=True` to the `tokenizer.decode()` call to strip the model's special tokens (e.g., `<pad>` and `</s>` for Marian models) from the decoded text.
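
If you do want GPU execution, the usual pattern is to move the model and the input tensors explicitly rather than passing a `device` argument to `encode()` or `generate()`. A minimal sketch, assuming a CUDA device may or may not be present (it falls back to the CPU otherwise):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

# Pick a GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-ro").to(device)

# Tokenize and move the input tensor to the same device as the model
input_ids = tokenizer.encode("Hello, world!", return_tensors="pt").to(device)

# Generate and decode as before
with torch.no_grad():
    output = model.generate(input_ids, max_length=128)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```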

I hope this is helpful! Please let me know if you have any other questions.
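
As a side note (not part of the changes above), the same model can translate several sentences in one call by padding them into a batch and decoding with `tokenizer.batch_decode()`. A minimal sketch; the example sentences are just placeholders:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-ro")

# A small batch of placeholder sentences
sentences = ["Hello, world!", "How are you today?"]

# Pad the batch into a rectangular tensor; returns input_ids and attention_mask
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

# Generate translations for the whole batch without tracking gradients
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=128)

# Decode every sequence in the batch, dropping special tokens
translations = tokenizer.batch_decode(outputs, skip_special_tokens=True)
for src, tgt in zip(sentences, translations):
    print(f"{src} -> {tgt}")
```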
