Conversational Machine Learning
Spring 2018, CMU 10-608


Instructor: Igor Labutov, Bishan Yang
Lectures: Tuesdays, 6:30-8:50pm in 4303 Gates Hillman Center (GHC)
Office Hours: Tuesdays 4:00-5:00pm (GHC 8120) and Wednesdays 6:10-7:10pm (GHC 8112), or by appointment

Class goals

Machine Learning today is largely about finding patterns in large amounts of data. But as personal devices that interact with us in natural language become ubiquitous (e.g., Siri, Google Now), they open an exciting possibility: letting users teach machines in natural language, similar to how we teach each other. Conversation, as an interface to machine learning systems, opens a new paradigm that not only unifies several existing machine learning paradigms (e.g., active learning, supervised learning) but also brings a unique set of advantages and challenges at the intersection of machine learning and natural language processing.

This course is structured as a mini-challenge (project) course. We will present you with several well-defined open problems and provide recently collected datasets that will let you get started immediately! But you are also free to define your own problem using that data, or to come up with your own problem entirely. There are no other constraints, and since this is a new area of research, you can (and should) be creative, even a little crazy, in coming up with methods to tackle these problems. At the same time, we will provide guidance via readings and class-based hacking sessions. This course is a great way to get introduced to open problems in a collaborative and structured environment.

Some of the problems that we will be looking at throughout the course are:

  • Semantic Parsing
    • How do we transform natural language instructions into executable "programs"? (See the sketch after this list.)
    • Models for semantic parsing: from grammar-based to end-to-end neural models
    • Semantic parsing with conversational context
  • Machine Learning from Natural Language
    • Transforming natural language instructions into machine learning models
    • Dialog-based learning
    • Correcting models from natural language feedback
  • Machine Reading
    • End-to-end Models for reading and answering questions
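
To make the first topic concrete, below is a minimal, hypothetical sketch of what "executable semantic parsing" means: an utterance is mapped to a logical form, and the logical form is executed against a small knowledge base to produce an answer. Everything in the sketch (the regex pattern, the logical-form format, and the toy knowledge base with made-up population numbers) is for illustration only; real systems such as SEMPRE learn this mapping from data rather than hard-coding it.

    import re

    # Toy knowledge base: city -> population (illustrative numbers only).
    KB = {"pittsburgh": 302971, "philadelphia": 1567872}

    def parse(utterance):
        """Map an utterance to a logical form (here, a tiny predicate-argument tuple)."""
        m = re.match(r"what is the population of (\w+)\??$", utterance.lower())
        if m:
            return ("population", m.group(1))  # e.g. ("population", "pittsburgh")
        raise ValueError("cannot parse: " + utterance)

    def execute(logical_form):
        """Execute a logical form against the knowledge base to get a denotation."""
        predicate, argument = logical_form
        if predicate == "population":
            return KB[argument]
        raise ValueError("unknown predicate: " + predicate)

    print(execute(parse("What is the population of Pittsburgh?")))  # prints 302971

In the course, the learning problem is to induce the parsing step from data (question-answer pairs, demonstrations, or natural language feedback) instead of writing it by hand.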

Note that this course is primarily discussion- and project-based. The goal of the course is to produce original work that addresses a problem in one of the areas above. Details about project guidelines, timeline, and expectations can be found here:

    Project guidelines, expectations and timeline

Datasets

We will initially be releasing three datasets that can be used for projects in this course. Please take a look at them as you are thinking of project ideas:
  • Teaching concepts in natural language (Email): a dataset of emails (labeled with categories) paired with natural language explanations of those categories, such as "emails about meetings" or "announcement emails". The task is to learn a machine learning classifier from these natural language descriptions (a minimal baseline sketch follows this list).
  • Question answering over personal narrative: a dataset of simulated personal stories (5 different synthetic worlds), paired with questions about various facts described in the stories. The task is to perform question answering based on the taught knowledge. The difficulty is that many of the questions require reasoning about state changes and integrating multiple facts.
  • Natural language feedback to semantic parsers: a dataset of natural language questions, paired with a semantic parser's original interpretation, which in many cases was wrong (i.e., the question was parsed incorrectly). For each such question, there is a human explanation of why the semantic parser's interpretation was wrong. The task is to learn the semantic parser from such natural language corrections (which can be interpreted as noisy labels).
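
As one illustration of the email dataset's task, a very simple baseline is to treat each natural language explanation as a crude keyword rule, use the rule to produce noisy labels, and then train an ordinary classifier on those labels. This is a hedged sketch using scikit-learn: the emails, the explanation string, and the normalization rules below are all hypothetical and do not reflect the released dataset's format or an intended solution.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    STOPWORDS = {"email", "about", "the", "of", "a", "an", "to", "for"}

    def normalize(word):
        """Very crude normalization: lowercase, strip punctuation, drop a trailing 's'."""
        word = word.lower().strip(".,!?:#")
        return word[:-1] if word.endswith("s") else word

    def explanation_to_keywords(explanation):
        """Keep the content words of a natural language explanation as a keyword set."""
        return {normalize(w) for w in explanation.split()} - STOPWORDS

    def noisy_label(email_text, keywords):
        """Weak rule: positive if any keyword from the explanation appears in the email."""
        return int(bool({normalize(w) for w in email_text.split()} & keywords))

    # Toy emails and a single explanation, all made up for illustration.
    emails = [
        "Reminder: project meeting moved to 3pm tomorrow",
        "Your receipt for order 1234",
        "Agenda for Friday's meeting attached",
        "Welcome to the newsletter",
    ]
    keywords = explanation_to_keywords("emails about meetings")

    vec = TfidfVectorizer()
    X = vec.fit_transform(emails)                    # bag-of-words features
    y = [noisy_label(e, keywords) for e in emails]   # noisy labels from the keyword rule

    clf = LogisticRegression().fit(X, y)             # classifier trained on noisy labels
    print(clf.predict(vec.transform(["Can we reschedule the meeting?"])))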

Schedule

The following schedule is tentative; it will continue to change based on time constraints and the interests of the people in the class. Lecture notes will be added as lectures progress.

Date | Topic | Readings | Presenters | Slides | Assignment
1/16 | Introduction to the course | (A)[1,2,4] | Igor and Bishan | Intro | 1. Read assigned readings; 2. Select paper presentation slot
1/23 | Executable Semantic Parsing: introduction to SEMPRE | SEMPRE tutorial; Assignment 1 | Igor and Bishan | Slides; SEMPRE files | Assignment 1 (due midnight 1/30)
1/30 | Executable Semantic Parsing, continued (CFG grammars) | | Igor and Bishan | Slides |
2/6 | Challenges: introduction to core course topics and datasets. Project proposal pitches. Introduction to deep learning | | Igor and Bishan | Slides |
2/13 | SEMPRE training; Neural semantic parsing (PyTorch tutorials) | (B)[1,4,5] | Igor and Bishan | Slides | Project proposals (due midnight 2/18)
2/20 | Neural semantic parsing (PyTorch tutorials) | Assignment 2; dataset for assignment | Igor and Bishan | PyTorch tutorials | Assignment 2 (due midnight 3/9)
2/27 | Proposal presentations | | | |
3/6 | Dialog; Machine Reading | (C)[2,3]; (E)[1] | Igor and Bishan | Slides |
3/13 | No class | | | |
3/20 | Concept learning by explanations; Semantic parsing with neural networks | [1], [2] | Christian and Satya | |
3/27 | Neural code generation; Transfer learning for neural semantic parsing | [1], [2] | Clay and Satya | |
4/3 | Learning with Latent Language; Neural semantic parsing | [1], [2] | Brian and Vidhan | |
4/10 | Semantic parsing from user feedback; Zero-shot relation extraction; BiDAF | [1], [2], [3] | Sophia, Prasoon, and Akhil | |
4/17 | Data recombination for semantic parsing; RL guided by natural language; Sentence embeddings | [1], [2], [3] | Dharini, Varun, and Akhil | |
4/24 | Zero-shot concept learning; Learning semantic parsers from feedback | | Shashank and Igor | |
5/1 | Final project presentations | | | |

Resources

Readings

(A) Executable Semantic Parsing

  1. Liang, Percy, and Christopher Potts. "Bringing machine learning and compositional semantics together." Annu. Rev. Linguist. 1.1 (2015): 355-376.
  2. Liang, Percy. "Learning executable semantic parsers for natural language understanding." Communications of the ACM 59.9 (2016): 68-76.
  3. J. Krishnamurthy and T. Mitchell. 2012. Weakly supervised training of semantic parsers. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL)
  4. Berant, Jonathan, et al. "Semantic Parsing on Freebase from Question-Answer Pairs." EMNLP 2013.
  5. P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL).
  6. S. Reddy, M. Lapata, and M. Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics (TACL) 2(10):377–392.
  7. Andreas, Jacob, Dan Klein, and Sergey Levine. "Learning with Latent Language." arXiv preprint arXiv:1711.00482 (2017).
  8. Srivastava, Shashank, Igor Labutov, and Tom Mitchell. "Joint Concept Learning and Semantic Parsing from Natural Language Explanations." Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017.
  9. S Srivastava, I Labutov, T Mitchell. 'Learning Classifiers from Declarative Language'. Learning from Limited Data Workshop, NIPS 2017.
  10. Wang, Sida I., Percy Liang, and Christopher D. Manning. "Learning language games through interaction." arXiv preprint arXiv:1606.02447 (2016).
  11. Integrated Learning of Dialog Strategies and Semantic Parsing. Aishwarya Padmakumar, Jesse Thomason, and Raymond J. Mooney. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pp. 547-557, Valencia, Spain, April 2017.

(B) Neural Semantic Parsing

  1. Dong, Li, and Mirella Lapata. "Language to logical form with neural attention." arXiv preprint arXiv:1601.01280 (2016).
  2. Jia, Robin, and Percy Liang. "Data recombination for neural semantic parsing." arXiv preprint arXiv:1606.03622 (2016).
  3. From language to programs: bridging reinforcement learning and maximum marginal likelihood. Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, Percy Liang. Association for Computational Linguistics (ACL), 2017.
  4. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, Luke Zettlemoyer. "Learning a Neural Semantic Parser from User Feedback." ACL (2017).
  5. C. Liang, J. Berant, Q. Le, K. D. Forbus, and N. Lao. 2017. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In Association for Computational Linguistics (ACL).
  6. Neural Semantic Parsing with Type Constraints for Semi-Structured Tables. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. ACL (2017)

(C) Learning through Dialog

  1. A Neural Conversational Model. Oriol Vinyals and Quoc Le. arXiv preprint arXiv:1506.05869, 2015.
  2. Jason Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.
  3. Learning through Dialogue Interactions by Asking Questions. Jiwei Li, Alexander Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston. ICLR 2017.
  4. Dialogue Learning With Human-in-the-Loop. Jiwei Li, Alexander Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston. ICLR 2017
  5. A. Bordes and J. Weston. 2017. Learning end-to-end goal-oriented dialog. In International Conference on Learning Representations (ICLR).
  6. Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings. He He, Anusha Balakrishnan, Mihail Eric and Percy Liang. Association for Computational Linguistics (ACL), 2017
  7. J. Li, W. Monroe, A. Ritter, D. Jurafsky, M. Galley, and J. Gao. 2016. Deep reinforcement learning for dialogue generation. In Empirical Methods in Natural Language Processing (EMNLP).
  8. Composite Task-Completion Dialogue Policy Learning via Hierarchical Deep Reinforcement Learning. Baolin Peng, Xiujun Li, Lihong Li, Jianfeng Gao, Asli Celikyilmaz, Sungjin Lee, Kam-Fai Wong. EMNLP 2017.

(D) Reinforcement Learning

  1. Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems.
  2. Asynchronous Methods for Deep Reinforcement Learning (Actor-critic). Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu. Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1928-1937, 2016.
  3. Mapping Instructions and Visual Observations to Actions with Reinforcement Learning. Dipendra Misra, John Langford, and Yoav Artzi. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017.
  4. Policy shaping: Integrating human feedback with reinforcement learning. Shane Griffith, Kaushik Subramanian, Jon Scholz, Charles Isbell, and Andrea Thomaz. Advances in Neural Information Processing Systems (NIPS), 2013.
  5. An Actor-Critic Algorithm for Sequence Prediction. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio. ICLR 2017.
  6. Guiding Reinforcement Learning Exploration Using Natural Language. Brent Harrison, Upol Ehsan, Mark O. Riedl.
  7. Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations. Upol Ehsan, Brent Harrison, Larry Chan, Mark O. Riedl.

(E) Reading Comprehension/Question Answering

  1. S. Sukhbaatar, A. Szlam, J. Weston, R. Fergus. End-To-End Memory Networks. NIPS, 2015.
  2. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS).
  3. Reading Wikipedia to Answer Open-Domain Questions. Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. In ACL 2017.
  4. Bidirectional attention flow for machine comprehension. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi. ICLR 2017.
  5. Zero-shot relation extraction via reading comprehension. Omer Levy, Minjoon Seo, Eunsol Choi, Luke Zettlemoyer. CoNLL 2017
  6. Adversarial examples for evaluating reading comprehension systems. Robin Jia, Percy Liang. Empirical Methods in Natural Language Processing (EMNLP), 2017.

Grading

The grade is determined by a paper presentation, your participation in class (asking good questions, making connections between topics, etc.), programming assignments, paper quizzes, and a final project. The final project can be a small innovation on top of methods and algorithms presented in the course, or your own project idea on topics covered in the course. The course grade is a weighted average of:
10% Participation
20% Paper presentations
10% Programming assignments
10% Paper quizzes
50% Final project

Prerequisites

This course assumes familiarity with basic NLP concepts, machine learning, and deep learning.

Web design: Anton Badev