Friday, November 12, 2010
Natural Language Processing (Stanford University)
Course topics:
1. Why Is NLP Difficult? The Hidden Structure Of Language, Newspaper Headlines, Machine Translation, Machine Translation History, Centauri/Arcturan Example.
2. Questions That Linguistics Should Answer, Machine Translation (MT), Probabilistic Language Models, Evaluation, Sparsity, Smoothing, How Much Mass To Withhold?
3. Finish Smoothing From Last Lecture, Kneser-Ney Smoothing, Practical Considerations, Machine Translation (Continued), Tokenization (Or Segmentation), Statistical MT Systems, IBM Translation Models.
4. Review Of Statistical MT, Model 1, The EM Algorithm, EM And Hidden Structure, EM Algorithm Demonstration In An Excel Spreadsheet, Assignment 1.
5. IBM Models 1-2 (Review), IBM Model 3, IBM Model 4, IBM Model 5, MT Evaluation, The BLEU Evaluation Metric, A Complete Translation System, Flaws Of Word-Based MT, Phrase-Based Statistical MT.
6. Machine Translation (Continued), Syntax-Based Models, Information Extraction & Named Entity Recognition, Named Entity Extraction, Precision And Recall, Naive Bayes Classifiers.
7. Naive Bayes Classifiers (Continued), Joint vs. Conditional Models, Features, Examples, Feature-Based Classifiers, Comparison To Naive Bayes, Building A Maxent Model.
8. Details Of The Maxent Model, Maxent Examples, Convexity, Feature Interaction, Classification, Smoothing, Inference In Systems.
9. MEMMs, The HMM POS Tagging Model, Summary Of Tagging, NER, Information Extraction And Integration, The Landscape Of IE Tasks, Machine Learning Methods, Relation Extraction, Clustering.
10. Parsing, Classical NLP Parsing, Two Views Of Linguistic Structure, Attachment Ambiguities, A Simple Prediction, What Is Parsing?, Top-Down Parsing, Bottom-Up Parsing, Parsing With PCFGs.
11. Chomsky Normal Form, Cocke-Kasami-Younger (CKY) Constituency Parsing, Extended CKY Parsing, Efficient CKY Parsing, Evaluating Parsing Accuracy, How Good Are PCFGs?, Improving PCFG Parsing Via Unlexicalized Parsing, Markovization.
12. Guest Lecturer: Dan Jurafsky. Syntactic Variation Versus Semantic Roles, Some Typical Semantic Roles, Two Solutions To The Difficulty Of Defining Semantic Roles, PropBank, FrameNet, Information Extraction Versus Semantic Role Labeling, Evaluation Measures, Parsing Algorithm, Combining Identification And Classification Models, Summary.
13. Lexicalized Parsing, Parsing Via Classification Decisions: Charniak (1997), Sparseness & The Penn Treebank, Complexity Of Lexicalized PCFG Parsing, Overview Of Collins' Model, Choice Of Heads, The Latest Parsing Results, Parsing And Search Algorithms.
14. Parsing As Search, Agenda-Based Parsing, What Can Go Wrong?, Search In Modern Lexicalized Statistical Parsers, Dependency Parsing, Naive Recognition/Parsing, Discriminative Parsing, Discriminative Models.
15. Why Study Computational Semantics?, Precise Semantics, An Early Example: Chat-80, Programming Language Interpreters, Logic: Some Preliminaries, Compositional Semantics, A Simple DCG Grammar With Semantics, Augmented CFG Rules, Semantic Grammars.
16. An Introduction To Formal Computational Semantics, Database/Knowledgebase Interfaces, Typed Lambda Calculus, Types Of Major Syntactic Categories, Adjective And PP Modification, Why Things Get More Complex, Generalized Quantifiers, Representing Proper Nouns With Quantifiers, Questions With Answers, How Could We Learn Such Representations?
17. Lexical Semantics, Lexical Information And NL Applications, Polysemy vs. Homonymy, WordNet, Word Sense Disambiguation, Corpora Used For WSD Work, Evaluation, Lexical Acquisition, Vector-Based Lexical Semantics, Measures Of Semantic Similarity.
18. Question Answering Systems And Textual Inference, A Brief (Academic) History, Top-Performing Systems, Answer Types In State-Of-The-Art QA Systems, Semantics And Reasoning For QA, The Textual Inference Task, Why We Need Sloppy Matching, QA Beyond TREC.
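As a concrete taste of the language-modeling and smoothing material covered early in the course, here is a minimal add-k (Laplace-style) smoothed bigram model. The toy corpus, the `prob` function name, and the parameter `k` are illustrative assumptions for this sketch, not code from the course itself.

```python
from collections import Counter

# Toy corpus; <s> and </s> mark sentence boundaries
# (illustrative data, not from the course materials).
sentences = [["<s>", "the", "cat", "sat", "</s>"],
             ["<s>", "the", "dog", "sat", "</s>"]]

unigrams = Counter(w for s in sentences for w in s)
bigrams = Counter((a, b) for s in sentences for a, b in zip(s, s[1:]))
vocab = set(unigrams)

def prob(b, a, k=1.0):
    """Add-k smoothed bigram probability P(b | a)."""
    return (bigrams[(a, b)] + k) / (unigrams[a] + k * len(vocab))

# Smoothing withholds mass from seen bigrams so that unseen
# bigrams such as ("dog", "cat") get a nonzero probability:
print(prob("cat", "the"))  # 0.25
print(prob("cat", "dog"))  # > 0 even though "dog cat" never occurs
```

The key point of the smoothing lectures is visible here: without the `+ k` terms, any bigram absent from training data would zero out the probability of an entire sentence.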