Implementation of "Self-Consistency Improves Chain of Thought Reasoning in Language Models" (Wang et al. 2023)

dj-sorry/self_consistency


Chain of Thought Experiments

Implementation of Wang, Xuezhi, et al. "Self-consistency improves chain of thought reasoning in language models." arXiv preprint arXiv:2203.11171 (2022).

Features

  • Support for multiple language models (OPT, BLOOM, GPT-2, T5)
  • Chain of Thought (CoT) reasoning implementation
  • Self-consistency evaluation
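
The self-consistency idea from the paper is to sample several chain-of-thought completions for the same question and take a majority vote over their final answers. A minimal sketch of that aggregation step (the function name is illustrative, not this repository's API):

```python
from collections import Counter

def majority_vote(answers):
    """Self-consistency aggregation: return the most frequent final answer
    among the sampled chain-of-thought completions."""
    filtered = [a for a in answers if a is not None]  # drop unparsable samples
    if not filtered:
        return None
    # most_common(1) returns [(answer, count)] for the top answer
    return Counter(filtered).most_common(1)[0][0]

# Five sampled reasoning paths; three agree on "42", one failed to parse
print(majority_vote(["42", "41", "42", None, "42"]))  # prints 42
```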

Usage

Basic usage:

```python
from experiments.src.experiment import ChainOfThoughtExperiment

# Initialize experiment
experiment = ChainOfThoughtExperiment(model_name="facebook/opt-1.3b")

# Run evaluation
results = experiment.evaluate_question(
    question="Your math question here",
    correct_answer="42",
    num_samples=5
)
```
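
Grading a sampled completion requires pulling a final answer out of the generated reasoning text. A common heuristic for math benchmarks is to take the last number in the completion; a hedged sketch (this helper is illustrative, not the repository's code):

```python
import re

def extract_final_answer(text):
    """Return the last number in a generated chain of thought, a common
    heuristic for grading math answers. Commas are stripped so that
    numbers like 1,000 parse as a single token."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return matches[-1] if matches else None

cot = "There are 3 boxes with 14 apples each, so 3 * 14 = 42. The answer is 42."
print(extract_final_answer(cot))  # prints 42
```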

Run the full experiment:

```shell
python -m experiments.main
```

Configuration

Modify the following in main.py:

  • Model name
  • Number of samples
  • Temperature
  • Problem sets or benchmarks (GSM8K, ReClor, ...)
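
Taken together, those settings might look like the following in main.py (a hypothetical sketch; the actual variable names in the repository may differ):

```python
# Hypothetical configuration block; names and defaults are illustrative.
CONFIG = {
    "model_name": "facebook/opt-1.3b",  # any supported model (OPT, BLOOM, GPT-2, T5)
    "num_samples": 5,                   # reasoning paths sampled per question
    "temperature": 0.7,                 # sampling temperature; > 0 gives diverse paths
    "dataset": "gsm8k",                 # benchmark to evaluate, e.g. GSM8K or ReClor
}
```

A temperature of 0 would make every sampled path identical, defeating the majority vote, so self-consistency is typically run with temperature sampling enabled.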

Testing

Run the test suite:

```shell
python -m pytest tests/
```
