Sidd Karamcheti

Teaching computers how to read, one word at a time.

My name is Sidd, and I'm a senior at Brown University.

I study both Computer Science and Literary Arts.

Feel free to scroll down, or ask me a question below.

I use machine learning to parse the query and direct you to the appropriate section.


I was raised in New York, New York, but since middle school, I've been living in Palo Alto, CA. I spent four years at Gunn High School, where I was a part of its very cool Robotics Team.

After high school, I spent my freshman year of college at UC Berkeley, where I studied Electrical Engineering and Computer Science. However, after being introduced to natural language processing and realizing that I wanted a more integrated education in both literature and computer science, I decided I needed a change. With the help of a few really supportive professors, I was able to transfer to Brown University.

I'm now a senior at Brown, dual-concentrating in both Computer Science and the Literary Arts, a discipline of English with a focus on creative writing. I'm lucky to be advised by both Professor Eugene Charniak and Professor Stefanie Tellex. I'm also currently a Research Intern at Bloomberg L.P., advised by Dr. Gideon Mann.

Now, I spend a lot of time doing research on getting computers to interface with language in all its forms, working on problems in human-robot interaction (how can we get robots to understand instructions?), question-answering (how do we read and understand text?), and semantic parsing (how do we map language to programs?). At Bloomberg, I also work on automated program fuzz-testing via deep reinforcement learning. Check out my research!


Higher Education:

Brown University, Providence, Rhode Island
Sc.B. Computer Science, A.B. Literary Arts
Technical GPA 4.0/4.0

While at Brown, I study both Computer Science and Literary Arts. My goal is to explore how we can use computers to reason about language in all its forms (especially literature).

The following is a list of some of my relevant coursework:

  1. CSCI 2950K: Deep Learning for NLP
  2. CSCI 2951K: Language and Planning for Robotics
  3. CSCI 2951F: Learning and Sequential Decision Making
  4. CSCI 1460: Computational Linguistics
  5. CSCI 1420: Machine Learning
  6. CSCI 1570: Algorithms
  7. CSCI 1380: Distributed Systems
  8. CSCI 1260: Compilers and Program Analysis
  9. CSCI 1950Y: Formal Methods and Logic for Systems

August 2015 - Present

University of California - Berkeley, Berkeley, California
B.S. Electrical Engineering and Computer Science
Technical GPA 3.92/4.0

At Berkeley, I was lucky enough to be the recipient of the Regents' and Chancellor's Scholarship, through which I met my advisor, Professor Ras Bodik.

The following is a list of some of my relevant coursework:

  1. CS 61A: The Structure and Interpretation of Computer Programs
  2. CS 61B: Data Structures
  3. CS 61C: Great Ideas in Computer Architecture
  4. CS 70: Discrete Mathematics and Probability Theory
  5. EE 16A: Designing Information Devices and Systems

August 2014 - 2015

Research & Work Experience

Research Experience:

CTO Research Group, Bloomberg L.P.
Research Intern - Advised by Gideon Mann

Current Research: Reinforcement Learning for Smart Program Analysis and Automatic Bug Detection (Program “Fuzzing”).

Brief Summary: Many existing methods for automatic bug detection rely on randomness or other naive strategies to test programs, resulting in suboptimal performance. How do we develop intelligent agents capable of leveraging these tools in different contexts, in order to optimize the rate of discovering new vulnerabilities?

May 2017 - Present

H2R (Humans To Robots Lab), Brown
Undergraduate Researcher - Advised by Stefanie Tellex

Current Research: Weakly Supervised Language Grounding via Reinforcement Learning, Language and Partial Observability

Previous: Grounding Language to Actions and Goals (RoboNLP, 2017), Interpreting Human-Robot Instructions of Varied Complexities with Abstract Markov Decision Processes (RSS, 2017).

Brief Summary: Language provides a clean and intuitive way to communicate with robots. However, interpretation is difficult, as understanding an instruction requires identifying its specificity and intent. How do we approach these problems, and develop models that can both accurately interpret human-robot instructions and use the obtained information to efficiently execute tasks?

January 2016 - Present

BLLIP (Natural Language), Brown
Undergraduate Researcher - Advised by Eugene Charniak

Current Research: Semantic Parsing, End-to-End Methods for Interpretable Question-Answering, Bias Detection in News

Previous: Interpretable Question Answering via Grounding to External World Updates (In Review).

Brief Summary: Existing state-of-the-art methods for question-answering on short stories require significant amounts of data, and often result in uninterpretable learned representations that live as numeric vectors in a deep neural network. How do we develop models that are more sample-efficient, with learned representations that are easily understood by users and other systems?

January 2016 - Present

Industry Experience:

Wealthfront, Redwood City, California
Software Engineering Intern

While at Wealthfront, I worked primarily on the backend, contributing to parts of several production systems.

May - August 2016

Writelab, Berkeley, California
Natural Language Processing/Machine Learning Intern

While at Writelab, I designed and developed a series of algorithmic systems for identification and evaluation of literary features in text.

One such system was used for clausal analysis and modifier branching.
Another was a system for identifying thesis statements and topic progression in academic and nonfiction writing using coreference resolution.

May - November 2015

AutoGrid Systems, Redwood City, California
Software Development/Research Intern

As an intern at AutoGrid Systems, I learned about distributed, real-time systems and machine learning, and was exposed to several data science techniques for finding patterns in large amounts of data. My work at AutoGrid culminated in a research paper on Demand Response Optimization in the Smart Grid, which I submitted to the Intel Science Talent Search.

June - November 2013

Teaching Experience:

CSCI 1380 - Distributed Systems, Brown CS
Head Teaching Assistant

I chose to TA this course over natural language processing/machine learning courses for the unique opportunity to develop new course material and lectures with Professor Theo Benson, who is new to Brown.

I'm responsible for revising the course assignments (written in Go), adding new assignments covering blockchains and cryptocurrencies, holding office hours and lab sections, and providing any necessary support. Projects include: Tapestry (a distributed hash table), Raft (a consensus protocol), and Puddlestore (a distributed file system).

November 2017 - Present

CSCI 2950K/1470 - Deep Learning, Brown CS
Head Teaching Assistant

As the Head TA for CSCI 2950K - Deep Learning in Fall 2016 and the subsequent undergraduate version, CSCI 1470, in Fall 2017, my main responsibilities are to design course assignments, hold office hours and recitations, and provide support for the weekly lab sections.

The course is built on top of TensorFlow. Topics covered: Feed-Forward Neural Nets, Convolutional Neural Nets for Image Classification, Recurrent Neural Nets for Language Modeling, Sequence-to-Sequence Models for Translation, Deep Reinforcement Learning, and Variational Autoencoders.

June 2016 - Present

CSCI 1460 - Computational Linguistics, Brown CS
Head Teaching Assistant

HTA for undergraduate course on statistical natural language processing, Spring 2017. Topics covered: Language Modeling (N-grams), Machine Translation (IBM Models), Part-of-Speech Tagging (Hidden Markov Models), Parsing (Chart Parsing), and Topic Modeling (Latent Dirichlet Allocation).

January - June 2017

CS 61A, UC Berkeley EECS
Group Tutor/Reader

I taught small recitation sections meant to supplement work done in lecture, holding additional sections and office hours when necessary. I also contributed to the course textbook, Composing Programs.

January - May 2015

Publications & Projects

Accepted Publications:

  1. "A Tale of Two DRAGGNs: A Hybrid Approach for Interpreting Action and Goal-Oriented Instructions,"
    Siddharth Karamcheti, Edward Williams, Dilip Arumugam, Mina Rhee, Nakul Gopalan, Lawson Wong, Stefanie Tellex.
    1st Workshop on Language Grounding in Robotics (RoboNLP) at Association for Computational Linguistics (ACL) 2017.
    Best Paper Award Recipient.

  2. "Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities,"
    Dilip Arumugam*, Siddharth Karamcheti*, Nakul Gopalan, Lawson Wong, Stefanie Tellex.
    Robotics: Science and Systems (RSS) 2017.
    Invited for submission to Autonomous Robots Journal (RSS Special Issue).

Submissions in Review:

  1. "World Interaction Networks: Grounding Natural Language to World Updates with Minimal Supervision,"
    Siddharth Karamcheti, Eugene Charniak, Stefanie Tellex.

  2. "Modeling Latent Attention within Neural Networks,"
    Chris Grimm, Dilip Arumugam, Siddharth Karamcheti, Lawson Wong, Michael Littman.


TensorFlow and Deep Learning Tutorials

Implementations of several state-of-the-art deep learning models, for tutorial purposes. These are a few of the models I've implemented: Variational Autoencoders, Convolutional Neural Networks, and End-to-End Memory Networks.


PyDecaf

Final project for CSCI 1260: Compilers. A fully functioning compiler that translates DeCaf (a lightweight language based on Java) source files to C code. Written in Python, the PyDecaf compiler covers all steps from parsing to abstract syntax tree generation, semantic and syntax checking, and code generation.

Targeted Question-Answering

A small project that actually powers the search algorithm for this very site (scroll up). It is a classification algorithm built on a multi-class perceptron I wrote myself, and it directs you to the appropriate section depending on the query you issue.
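The routing idea can be sketched in a few lines. Below is a minimal multi-class perceptron over bag-of-words features, in the spirit of (but not identical to) the classifier behind this site; the section labels, example queries, and function names are all illustrative.

```python
from collections import defaultdict

SECTIONS = ["about", "education", "research", "teaching", "projects"]

def featurize(query):
    # Bag-of-words features: each lowercase token counts once per occurrence.
    feats = defaultdict(float)
    for tok in query.lower().split():
        feats[tok] += 1.0
    return feats

def score(weights, feats):
    return sum(weights[f] * v for f, v in feats.items())

def predict(weights, query):
    feats = featurize(query)
    return max(SECTIONS, key=lambda s: score(weights[s], feats))

def train(examples, epochs=10):
    # Standard multi-class perceptron update: on a mistake, add the
    # features to the true class's weights and subtract them from the
    # (wrongly) predicted class's weights.
    weights = {s: defaultdict(float) for s in SECTIONS}
    for _ in range(epochs):
        for query, label in examples:
            guess = predict(weights, query)
            if guess != label:
                for f, v in featurize(query).items():
                    weights[label][f] += v
                    weights[guess][f] -= v
    return weights

# Tiny illustrative training set (the real site uses its own data).
examples = [
    ("where did you go to school", "education"),
    ("what do you research", "research"),
    ("have you taught any courses", "teaching"),
    ("show me your projects", "projects"),
    ("who are you", "about"),
]
w = train(examples)
print(predict(w, "what courses have you taught"))  # → teaching
```

Because the perceptron keeps one weight vector per section, adding a new section to route to is just a matter of appending a label and a few example queries.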

CYK Parser for Context-Free Grammars

A small project designed to gain familiarity with Racket, a clean and elegant functional programming language. I implemented the Cocke-Younger-Kasami algorithm, a simple parser for context-free grammars, with built-in support for grammar creation and parse-tree generation.
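The CYK recognizer itself is compact. Below is a sketch in Python rather than Racket, using an illustrative toy grammar assumed to be in Chomsky normal form (which CYK requires); the rules and sentence are made up for the example.

```python
# CYK recognizer for a context-free grammar in Chomsky normal form (CNF).
# Binary rules map a pair of child nonterminals to the parents that can
# produce them; lexical rules map terminals to nonterminals.
from itertools import product

BINARY = {   # A -> B C, stored as (B, C) -> {A}
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
LEXICAL = {  # A -> terminal
    "the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"},
}

def cyk(words, start="S"):
    n = len(words)
    # table[i][j] holds every nonterminal that derives words[i..j] inclusive.
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(LEXICAL.get(w, ()))
    for span in range(2, n + 1):          # span length
        for i in range(n - span + 1):     # span start
            j = i + span - 1              # span end
            for k in range(i, j):         # split point
                for B, C in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= BINARY.get((B, C), set())
    return start in table[0][n - 1]

print(cyk("the dog chased the cat".split()))  # → True
print(cyk("dog the chased".split()))          # → False
```

Recovering an actual parse tree (as the Racket project does) is a small extension: store backpointers `(k, B, C)` alongside each entry in the table and walk them down from the start symbol.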


I'm always looking for new opportunities to learn new skills.
Please feel free to contact me with any job opportunities, research projects I could help with, or for any further information about me.

I look forward to hearing from you!