The Commons

Patent Title: Building scalable n-gram language models using maximum likelihood maximum entropy n-gram models

Assignee: IBM
Patent Number: US5640487
Issue Date: 06-17-1997
Application Number:
File Date: 06-07-1995


Abstract: The present invention is an n-gram language modeler which significantly reduces the memory storage requirement and convergence time for language modeling systems and methods. The present invention aligns each n-gram with one of n non-intersecting classes. A count is determined for each n-gram, representing the number of times that n-gram occurred in the training data. The n-grams are separated into these classes and complement counts are determined. Using the counts and complement counts, factors are determined, one factor for each class, by an iterative scaling algorithm. The language model probability, i.e., the probability that a word occurs given the previous two words, is determined using these factors.
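For intuition only, below is a minimal sketch of the general scheme the abstract describes: trigrams are partitioned into a small number of non-intersecting classes, one multiplicative factor is tied to each class, and the factors are fitted with an iterative-scaling update before being used to score p(word | previous two words). Everything concrete in the sketch is an assumption rather than the patented method: the particular class partition (full trigram seen / only its suffix bigram seen / neither), the add-one unigram base distribution, the toy corpus, and the use of a held-out split in place of the patent's complement counts.

```python
from collections import Counter

# Tiny corpora purely for illustration (assumed, not from the patent).
train = ("the cat sat on the mat the cat sat on the rug "
         "a dog sat on the mat the dog lay on the rug").split()
heldout = "the cat lay on the mat a cat sat on the rug".split()
vocab = sorted(set(train) | set(heldout))

def trigrams(ws):
    return list(zip(ws, ws[1:], ws[2:]))

tri_counts = Counter(trigrams(train))        # trigram counts from training data
big_counts = Counter(zip(train, train[1:]))  # bigram counts from training data

# Assumed partition of trigrams into n = 3 non-intersecting classes:
#   class 2: the full trigram was observed in training
#   class 1: only its suffix bigram (w2, w3) was observed
#   class 0: neither was observed
def klass(w1, w2, w3):
    if (w1, w2, w3) in tri_counts:
        return 2
    if (w2, w3) in big_counts:
        return 1
    return 0

# Base distribution: add-one-smoothed unigram estimate (an assumption).
uni = Counter(train)
q = {w: (uni[w] + 1) / (len(train) + len(vocab)) for w in vocab}

# Model: p(w3 | w1, w2) is proportional to alpha[class(w1, w2, w3)] * q(w3),
# so only one factor per class is stored rather than one parameter per trigram.
def prob(w1, w2, w3, alpha):
    z = sum(alpha[klass(w1, w2, v)] * q[v] for v in vocab)
    return alpha[klass(w1, w2, w3)] * q[w3] / z

# Observed class counts, taken here from a held-out split (a stand-in for the
# patent's counts and complement counts, which this sketch does not reproduce).
held_tris = trigrams(heldout)
observed = Counter(klass(*t) for t in held_tris)

# Iterative scaling: exactly one class indicator fires per trigram, so the
# update is simply alpha_c <- alpha_c * observed_c / expected_c.
alpha = [1.0, 1.0, 1.0]
for _ in range(50):
    expected = [0.0, 0.0, 0.0]
    for w1, w2, _ in held_tris:
        z = sum(alpha[klass(w1, w2, v)] * q[v] for v in vocab)
        for v in vocab:
            c = klass(w1, w2, v)
            expected[c] += alpha[c] * q[v] / z
    alpha = [a * (observed[c] / e if e > 0 else 1.0)
             for c, (a, e) in enumerate(zip(alpha, expected))]

print("per-class factors:", [round(a, 3) for a in alpha])
print("p(mat | on the) =", round(prob("on", "the", "mat", alpha), 4))
print("p(rug | on the) =", round(prob("on", "the", "rug", alpha), 4))
```

Because each trigram activates exactly one class indicator, the iterative scaling step reduces to multiplying each factor by the ratio of its observed to expected class count, which is why the update loop stays so small and why only one factor per class needs to be stored.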

Notes:

IBM Pledge dated 1/11/2005