Undergraduate Thesis (Skripsi)
PARAPHRASE GENERATION FOR INDONESIAN-LANGUAGE TEXT USING LONG SHORT-TERM MEMORY (LSTM)
In Natural Language Processing, Paraphrase Generation is the task of producing sentences that are semantically equivalent to an input sentence. With the advance of neural methods, paraphrase generation, which previously relied on template-based approaches or statistical machine translation, can now be performed with neural models such as Long Short-Term Memory (LSTM) networks in a Sequence-to-Sequence architecture, i.e., an Encoder-Decoder model. This study investigates the performance of an LSTM Sequence-to-Sequence model, augmented with an Attention mechanism, for Paraphrase Generation on Indonesian-language text. Evaluated with the automatic metric BLEU at the unigram, bigram, trigram, and 4-gram levels, the model scores 0.48, 0.34, 0.23, and 0.15, respectively, and it scores 0.51 on the automatic metric METEOR. In addition to the automatic metrics, a questionnaire-based human evaluation assessed the relevance and grammatical correctness of the paraphrases generated by the model; on a scale of 1 to 5, the average scores for relevance and grammatical correctness are 3.71 and 4.10, respectively.
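As a concrete illustration of the architecture described in the abstract, the sketch below shows a minimal LSTM Encoder-Decoder with a dot-product (Luong-style) attention mechanism in PyTorch. This is not the thesis's actual implementation; all class names, vocabulary sizes, and dimensions are illustrative assumptions.

```python
# Minimal sketch of an LSTM Sequence-to-Sequence model with attention.
# Hyperparameters (vocab sizes, dimensions) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids
        embedded = self.embedding(src)
        outputs, (hidden, cell) = self.lstm(embedded)
        return outputs, hidden, cell  # outputs: (batch, src_len, hid_dim)


class LuongAttention(nn.Module):
    """Dot-product (Luong-style) attention over encoder outputs."""
    def forward(self, decoder_hidden, encoder_outputs):
        # decoder_hidden: (batch, hid_dim); encoder_outputs: (batch, src_len, hid_dim)
        scores = torch.bmm(encoder_outputs, decoder_hidden.unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=1)                       # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights


class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.attention = LuongAttention()
        self.out = nn.Linear(hid_dim * 2, vocab_size)

    def forward(self, token, hidden, cell, encoder_outputs):
        # token: (batch, 1) current target token id
        embedded = self.embedding(token)
        output, (hidden, cell) = self.lstm(embedded, (hidden, cell))
        context, _ = self.attention(output.squeeze(1), encoder_outputs)
        logits = self.out(torch.cat([output.squeeze(1), context], dim=1))
        return logits, hidden, cell


# Toy usage with random token ids (vocabulary of 1000 words).
encoder, decoder = Encoder(1000), Decoder(1000)
src = torch.randint(0, 1000, (2, 7))             # batch of 2 source sentences
enc_out, h, c = encoder(src)
sos = torch.zeros(2, 1, dtype=torch.long)        # assumed <sos> id = 0
logits, h, c = decoder(sos, h, c, enc_out)
print(logits.shape)                              # (2, 1000) next-token scores
```

The automatic metrics reported above can be computed, for example, with NLTK; the sketch below uses cumulative BLEU weights up to each n-gram order and NLTK's METEOR implementation. The reference and hypothesis sentences are placeholders, not data from the study.

```python
# Sketch of computing BLEU-1..BLEU-4 and METEOR with NLTK (placeholder sentences).
import nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # METEOR needs the WordNet data

reference = "saya pergi ke pasar pagi ini".split()
hypothesis = "pagi ini saya pergi ke pasar".split()

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))   # uniform weights over 1..n-grams
    score = sentence_bleu([reference], hypothesis, weights=weights,
                          smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.2f}")

# Recent NLTK versions expect pre-tokenized references and hypothesis.
print(f"METEOR: {meteor_score([reference], hypothesis):.2f}")
```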
| Inventory Code | Barcode | Call Number | Location | Status |
|---|---|---|---|---|
| 2307005098 | T125426 | T1254262023 | Central Library (Reference) | Available (Not for Loan) |
No other version available