Domain Specific Author Attribution Based on Feedforward Neural Network Language Models

February 24, 2016 · Entered Twilight · 🏛 International Conference on Pattern Recognition Applications and Methods

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 10.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, aggregate.m, bprop.m, cell2csv.m, confusion_array.m, dataprep.m, display_nearest_words.m, eval_data.m, extract_sentences.m, fprop.m, gen_data.m, getfile.m, idx2word.m, load_data.m, main_classify.m, main_comp_ppl.m, main_example.m, main_gen_data.m, main_gen_lm.m, main_lm_opt.m, main_porterStemmer.m, main_profile.m, main_test.m, nbest_accuracy.m, porterStemmer.m, predict_target_word.m, prep_ngram.m, process_options.m, raw, read_confusion.m, read_nbest.m, sent2idx.m, seq_ppl.m, seq_probability.m, stem, test_accuracy.m, train.m, vocab_indexing.m, word2idx.m, word_distance.m, write_data.m

Authors: Zhenhao Ge, Yufang Sun
arXiv ID: 1602.07393
Category: cs.CL: Computation & Language
Cross-listed: cs.LG, cs.NE
Citations: 4
Venue: International Conference on Pattern Recognition Applications and Methods
Repository: https://github.com/zge/authorship-attribution/ ⭐ 17
Last Checked: 1 month ago
Abstract
Authorship attribution refers to the task of automatically determining the author based on a given sample of text. It is a problem with a long history and a wide range of applications. Building author profiles using language models is one of the most successful methods to automate this task. New language modeling methods based on neural networks alleviate the curse of dimensionality and usually outperform conventional N-gram methods. However, there has not been much research applying them to authorship attribution. In this paper, we present a novel setup of a Neural Network Language Model (NNLM) and apply it to a database of text samples from different authors. We investigate how the NNLM performs on a task with a moderate author set size and relatively limited training and test data, and how the topics of the text samples affect the accuracy. The NNLM achieves nearly a 2.5% reduction in perplexity, a measure of how well a trained language model fits the test data. Given 5 random test sentences, it also increases the author classification accuracy by 3.43% on average, compared with the N-gram methods using SRILM tools. An open source implementation of our methodology is freely available at https://github.com/zge/authorship-attribution/.
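The perplexity-based attribution the abstract describes can be sketched minimally: each candidate author gets a language model, a test sentence is scored under each model, and the author whose model yields the lowest perplexity wins. This is an illustrative Python sketch, not the authors' MATLAB implementation; the function names and the per-token probabilities below are hypothetical.

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

def attribute_author(scores_by_author):
    """Pick the author whose language model fits the sentence best,
    i.e. the one with the lowest perplexity on its token scores."""
    return min(scores_by_author, key=lambda a: perplexity(scores_by_author[a]))

# Hypothetical per-token probabilities a trained model (N-gram or NNLM)
# might assign to a 3-token test sentence, one entry per candidate author
scores = {
    "author_A": [math.log(p) for p in (0.30, 0.25, 0.20)],
    "author_B": [math.log(p) for p in (0.10, 0.05, 0.08)],
}
print(attribute_author(scores))  # author_A: its model assigns higher probabilities
```

In the paper's setup the same decision rule is applied to blocks of several test sentences (e.g. the 5-sentence case quoted above), which smooths out per-sentence noise before the minimum-perplexity author is chosen.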
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago