Table 9 Comparison of the deep learning models contributing to the code generation task

From: MarianCG: a code generation transformer model inspired by machine translation

| Model | Encoder name | Encoder size | Decoder name | Decoder size | CoNaLa dataset size | CoNaLa BLEU | CoNaLa exact match | DJANGO dataset size | DJANGO BLEU | DJANGO exact match |
|---|---|---|---|---|---|---|---|---|---|---|
| MarianCG (Ours) | MarianEncoder | 6 layers | MarianDecoder | 6 layers | 26K | 34.43 | 10.2 | 19K | 90.41 | 81.83 |
| TranX + BERT w/ mined [28] | bert-base-uncased | 12 layers | Grammar based | - | 100K | 34.2 | 5.8 | 19K | 79.86 | 81.03 |
| BERT + TAE [26] | bert-base-uncased | 12 layers | Transformer Decoder | 4 layers | 100K | 33.41 | - | 19K | - | 81.77 |
| MarianCG (Ours) | MarianEncoder | 6 layers | MarianDecoder | 6 layers | 13K | 30.9 | 6.2 | | | |
| BART w/ mined [27] | facebook/bart-base | 6 layers | facebook/bart-base | 6 layers | 13K + question bodies | 30.55 | - | | | |
| BART Base [27] | facebook/bart-base | 6 layers | facebook/bart-base | 6 layers | 13K | 26.24 | - | | | |
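The table reports two metrics per dataset: corpus-level BLEU between generated and reference code, and the percentage of generations that exactly match the reference. As a rough illustration of how such scores can be obtained from a Marian-style encoder-decoder like the 6-layer/6-layer model in the first row, here is a minimal Python sketch assuming the Hugging Face transformers API and sacrebleu. The checkpoint name, generation length, and whitespace-normalized exact match are illustrative assumptions; the paper's exact evaluation setup may differ.

```python
# Minimal sketch: score a seq2seq code generator with BLEU and exact match.
# The checkpoint name below is a hypothetical placeholder, not from the table.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import sacrebleu

MODEL_NAME = "AhmedSSoliman/MarianCG-CoNaLa"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_code(nl_intent: str) -> str:
    """Translate one natural-language intent into a code snippet."""
    inputs = tokenizer(nl_intent, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=128)  # length is an assumption
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def evaluate(intents, references):
    """Return (corpus BLEU, exact-match %) over parallel lists of examples."""
    hypotheses = [generate_code(x) for x in intents]
    bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score
    exact = 100.0 * sum(
        h.strip() == r.strip() for h, r in zip(hypotheses, references)
    ) / len(references)
    return bleu, exact

if __name__ == "__main__":
    intents = ["convert a list of integers xs into a comma-separated string"]
    references = ["','.join(str(x) for x in xs)"]
    print(evaluate(intents, references))
```

Note that BLEU rewards partial n-gram overlap while exact match only credits verbatim output, which is why the exact-match column is consistently much lower than the BLEU column for the same model and dataset.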