Memoization

From Wikipedia, the free encyclopedia

In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive calls to pure functions and returning the cached result when the same inputs occur again. Memoization has also been used in other contexts (and for purposes other than speed gains), such as in simple mutually recursive descent parsing.[1] It is a type of caching, distinct from other forms of caching such as buffering and page replacement. In the context of some logic programming languages, memoization is also known as tabling.[2]

Etymology

The term memoization was coined by Donald Michie in 1968[3] and is derived from the Latin word memorandum ('to be remembered'), usually truncated as memo in American English, and thus carries the meaning of 'turning [the results of] a function into something to be remembered'. While memoization might be confused with memorization (because they are etymological cognates), memoization has a specialized meaning in computing.

Overview

A memoized function "remembers" the results corresponding to some set of specific inputs. Subsequent calls with remembered inputs return the remembered result rather than recalculating it, thus eliminating the primary cost of a call with given parameters from all but the first call made to the function with those parameters. The set of remembered associations may be a fixed-size set controlled by a replacement algorithm or a fixed set, depending on the nature of the function and its use. A function can only be memoized if it is referentially transparent; that is, only if calling the function has exactly the same effect as replacing that function call with its return value. (Special case exceptions to this restriction exist, however.) While related to lookup tables, since memoization often uses such tables in its implementation, memoization populates its cache of results transparently on the fly, as needed, rather than in advance.
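
The distinction can be illustrated with a minimal Python sketch (the function names are illustrative, not from any particular library): the first function below is safe to memoize, while caching the second would silently return stale results.

import time

def square(n):
    # Referentially transparent: the result depends only on the argument,
    # so a call such as square(4) can always be replaced by its value, 16.
    return n * n

def stamped(n):
    # Not referentially transparent: the result also depends on the clock,
    # so replacing a call with a cached result would change behavior.
    return (n, time.time())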

Memoized functions are optimized for speed in exchange for a higher use of computer memory space. The time/space "cost" of algorithms has a specific name in computing: computational complexity. All functions have a computational complexity in time (i.e. they take time to execute) and in space.

Although a space–time tradeoff occurs (i.e., space used is speed gained), this differs from some other optimizations that involve time–space trade-offs, such as strength reduction, in that memoization is a run-time rather than compile-time optimization. Moreover, strength reduction potentially replaces a costly operation such as multiplication with a less costly operation such as addition, and the resulting savings can be highly machine-dependent (non-portable across machines), whereas memoization is a more machine-independent, cross-platform strategy.

Consider the following pseudocode function to calculate the factorial of n:

function factorial (n is a non-negative integer)
    if n is 0 then
        return 1 [by the convention that 0! = 1]
    else
        return factorial(n – 1) times n [recursively invoke factorial 
                                        with the parameter 1 less than n]
    end if
end function

For every integer n such that n ≥ 0, the final result of the function factorial is invariant; if invoked as x = factorial(3), the result is such that x will always be assigned the value 6. The non-memoized implementation above, given the nature of the recursive algorithm involved, would require n + 1 invocations of factorial to arrive at a result, and each of these invocations, in turn, has an associated cost in the time it takes the function to return the value computed. Depending on the machine, this cost might be the sum of:

  1. The cost to set up the functional call stack frame.
  2. The cost to compare n to 0.
  3. The cost to subtract 1 from n.
  4. The cost to set up the recursive call stack frame. (As above.)
  5. The cost to multiply the result of the recursive call to factorial by n.
  6. The cost to store the return result so that it may be used by the calling context.

In a non-memoized implementation, every top-level call to factorial includes the cumulative cost of steps 2 through 6 proportional to the initial value of n.

A memoized version of the factorial function follows:

function factorial (n is a non-negative integer)
    if n is 0 then
        return 1 [by the convention that 0! = 1]
    else if n is in lookup-table then
        return lookup-table-value-for-n
    else
        let x = factorial(n – 1) times n [recursively invoke factorial
                                         with the parameter 1 less than n]
        store x in lookup-table in the nth slot [remember the result of n! for later]
        return x
    end if
end function

In this particular example, if factorial is first invoked with 5, and then invoked later with any value less than or equal to five, those return values will also have been memoized, since factorial will have been called recursively with the values 5, 4, 3, 2, 1, and 0, and the return values for each of those will have been stored. If it is then called with a number greater than 5, such as 7, only two new results (7! and 6!) must be computed, since the recursive call for 5! is answered from the lookup table populated by the previous call. In this way, memoization allows a function to become more time-efficient the more often it is called, thus resulting in eventual overall speed-up.
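
As a concrete illustration, the memoized pseudocode above can be transcribed into Python as a minimal sketch (the name lookup_table mirrors the pseudocode's lookup-table; the dictionary persists across calls):

lookup_table = {}  # the lookup-table of the pseudocode, shared across calls

def factorial(n):
    if n == 0:
        return 1                # by the convention that 0! = 1
    if n in lookup_table:
        return lookup_table[n]  # a remembered result; no recursion needed
    x = factorial(n - 1) * n    # recursively invoke factorial with n - 1
    lookup_table[n] = x         # remember the result of n! for later
    return x

After factorial(5), the table holds entries for 1 through 5, so a later call factorial(7) computes only 7! and 6! itself and answers the rest from the table.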

Other considerations

Functional programming

Memoization is heavily used in compilers for functional programming languages, which often use a call-by-name evaluation strategy. To avoid the overhead of repeatedly calculating argument values, compilers for these languages heavily use auxiliary functions called thunks to compute the argument values, and memoize these functions to avoid repeated calculations.
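
The idea can be pictured in Python as follows (a sketch only; real compilers generate such thunks internally rather than exposing a class):

class Thunk:
    """A memoized suspension: the wrapped computation runs at most once."""
    def __init__(self, compute):
        self._compute = compute    # the argument expression, unevaluated
        self._evaluated = False
        self._value = None

    def force(self):
        if not self._evaluated:
            self._value = self._compute()  # evaluate on first use only
            self._evaluated = True
            self._compute = None           # let the closure be collected
        return self._value

arg = Thunk(lambda: sum(range(10**6)))  # passed without being evaluated
arg.force()  # the sum is computed here, on first use
arg.force()  # answered from the memoized value; nothing is recomputed

Memoizing the thunk in this way is what turns call-by-name evaluation into call-by-need (lazy) evaluation.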

Automatic memoization

While memoization may be added to functions internally and explicitly by a computer programmer in much the same way the above memoized version of factorial is implemented, referentially transparent functions may also be automatically memoized externally.[1] The techniques employed by Peter Norvig have application not only in Common Lisp (the language in which his paper demonstrated automatic memoization), but also in various other programming languages. Applications of automatic memoization have also been formally explored in the study of term rewriting[4] and artificial intelligence.[5]

In programming languages where functions are first-class objects (such as Lua, Python, or Perl[6]), automatic memoization can be implemented by replacing (at run-time) a function with its calculated value once a value has been calculated for a given set of parameters. The function that does this value-for-function-object replacement can generically wrap any referentially transparent function. Consider the following pseudocode (where it is assumed that functions are first-class values):

function memoized-call (F is a function object parameter)
    if F has no attached array values then
        allocate an associative array called values;
        attach values to F;
    end if;

    if F.values[arguments] is empty then
        F.values[arguments] = F(arguments);
    end if;

    return F.values[arguments];
end function

In order to call an automatically memoized version of factorial using the above strategy, rather than calling factorial directly, code invokes memoized-call(factorial)(n). Each such call first checks to see if a holder array has been allocated to store results, and if not, attaches that array. If no entry exists at the position values[arguments] (where arguments are used as the key of the associative array), a real call is made to factorial with the supplied arguments. Finally, the entry in the array at the key position is returned to the caller.
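
In Python, where functions are first-class objects that can carry attributes, the memoized-call pseudocode might be rendered as the following sketch (here the arguments are passed alongside the function rather than curried as in memoized-call(factorial)(n), and they must be hashable to serve as dictionary keys):

def memoized_call(f, *args):
    if not hasattr(f, "values"):
        f.values = {}              # attach the associative array to f itself
    if args not in f.values:
        f.values[args] = f(*args)  # a real call, made only on a cache miss
    return f.values[args]

x = memoized_call(factorial, 5)    # assuming a plain Python factorial exists
y = memoized_call(factorial, 5)    # answered from factorial.values

As with the pseudocode, only the outermost call is intercepted; recursive calls inside factorial still reach the unwrapped function, so intermediate results are not cached.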

The above strategy requires explicit wrapping at each call to a function that is to be memoized. In those languages that allow closures, memoization can be effected implicitly via a functor factory that returns a wrapped memoized function object in a decorator pattern. In pseudocode, this can be expressed as follows:

function construct-memoized-functor (F is a function object parameter)
    allocate a function object called memoized-version;

    let memoized-version(arguments) be
        if self has no attached array values then [self is a reference to this object]
            allocate an associative array called values;
            attach values to self;
        end if;

        if self.values[arguments] is empty then
            self.values[arguments] = F(arguments);
        end if;

        return self.values[arguments];
    end let;

    return memoized-version;
end function

Rather than call factorial, a new function object memfact is created as follows:

 memfact = construct-memoized-functor(factorial)

The above example assumes that the function factorial has already been defined before the call to construct-memoized-functor is made. From this point forward, memfact(n) is called whenever the factorial of n is desired. In languages such as Lua, more sophisticated techniques exist which allow a function to be replaced by a new function with the same name, which would permit:

 factorial = construct-memoized-functor(factorial)

Essentially, such techniques involve attaching the original function object to the created functor and forwarding calls to the original function being memoized via an alias when a call to the actual function is required (to avoid endless recursion), as illustrated below:

function construct-memoized-functor (F is a function object parameter)
    allocate a function object called memoized-version;

    let memoized-version(arguments) be
        if self has no attached array values then [self is a reference to this object]
            allocate an associative array called values;
            attach values to self;
            allocate a new function object called alias;
            attach alias to self; [for later ability to invoke F indirectly]
            self.alias = F;
        end if;

        if self.values[arguments] is empty then
            self.values[arguments] = self.alias(arguments); [not a direct call to F]
        end if;

        return self.values[arguments];
    end let;

    return memoized-version;
end function

(Note: Some of the steps shown above may be implicitly managed by the implementation language and are provided for illustration.)
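
In Python, closures make the functor factory particularly direct: the cache and the reference to the original function are simply captured variables, so no explicit alias attribute is needed. A minimal sketch under those assumptions:

def construct_memoized_functor(f):
    values = {}  # the associative array, captured by the closure below

    def memoized_version(*args):
        if args not in values:
            # f is the captured reference to the original function (the
            # 'alias' of the pseudocode), so rebinding the outer name
            # cannot make the wrapper call itself endlessly.
            values[args] = f(*args)
        return values[args]

    return memoized_version

factorial = construct_memoized_functor(factorial)  # rebinds the name, as in Lua

Python's decorator syntax expresses the same rebinding (@construct_memoized_functor written above the definition of factorial), and the standard library provides a ready-made equivalent in functools.lru_cache.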

Parsers

When a top-down parser tries to parse an ambiguous input with respect to an ambiguous context-free grammar (CFG), it may need an exponential number of steps (with respect to the length of the input) to try all alternatives of the CFG in order to produce all possible parse trees. This eventually would require exponential memory space. Memoization was explored as a parsing strategy in 1991 by Peter Norvig, who demonstrated that an algorithm similar to the use of dynamic programming and state-sets in Earley's algorithm (1970), and tables in the CYK algorithm of Cocke, Younger and Kasami, could be generated by introducing automatic memoization to a simple backtracking recursive descent parser to solve the problem of exponential time complexity.[1] The basic idea in Norvig's approach is that when a parser is applied to the input, the result is stored in a memotable for subsequent reuse if the same parser is ever reapplied to the same input.

Richard Frost and Barbara Szydlowski also used memoization to reduce the exponential time complexity of parser combinators, describing the result as a memoizing purely functional top-down backtracking language processor.[7] Frost showed that basic memoized parser combinators can be used as building blocks to construct complex parsers as executable specifications of CFGs.[8][9]

Memoization was again explored in the context of parsing in 1995 by Mark Johnson and Jochen Dörre.[10][11] In 2002, it was examined in considerable depth by Bryan Ford in the form called packrat parsing.[12]

In 2007, Frost, Hafiz and Callaghan[citation needed] described a top-down parsing algorithm that uses memoization to avoid redundant computations and so accommodate any form of ambiguous CFG in polynomial time (Θ(n⁴) for left-recursive grammars and Θ(n³) for non-left-recursive grammars). Their top-down parsing algorithm also requires only polynomial space for potentially exponentially ambiguous parse trees, through 'compact representation' and 'local ambiguities grouping'. Their compact representation is comparable with Tomita's compact representation of bottom-up parsing.[13] Their use of memoization is not limited to retrieving previously computed results when a parser is applied to the same input position repeatedly (which is essential for the polynomial time requirement); it is specialized to perform the following additional tasks:

  • The memoization process (which could be viewed as a 'wrapper' around any parser execution) accommodates an ever-growing direct left-recursive parse by imposing depth restrictions with respect to input length and current input position.
  • The algorithm's memo-table 'lookup' procedure also determines the reusability of a saved result by comparing the saved result's computational context with the parser's current context. This contextual comparison is the key to accommodating indirect (or hidden) left-recursion.
  • When a lookup in the memotable succeeds, instead of returning the complete result set, the process returns only references to the actual result, which speeds up the overall computation.
  • When updating the memotable, the memoization process groups the (potentially exponential) ambiguous results and ensures the polynomial space requirement.

Frost, Hafiz and Callaghan also described the implementation of the algorithm in PADL’08[citation needed] as a set of higher-order functions (called parser combinators) in Haskell, which enables the construction of directly executable specifications of CFGs as language processors. Their polynomial algorithm's ability to accommodate 'any form of ambiguous CFG' with top-down parsing is vital for syntax and semantic analysis in natural language processing. The X-SAIGA site has more about the algorithm and implementation details.

While Norvig increased the power of the parser through memoization, the augmented parser was still as time-complex as Earley's algorithm, which demonstrates a case of the use of memoization for something other than speed optimization. Johnson and Dörre[11] demonstrate another such non-speed-related application of memoization: the use of memoization to delay linguistic constraint resolution to a point in a parse where sufficient information has been accumulated to resolve those constraints. By contrast, in the speed optimization application of memoization, Ford demonstrated that memoization could guarantee linear-time parsing for parsing expression grammars, even for those languages that resulted in worst-case backtracking behavior.[12]

Consider the following grammar:

S → (A c) | (B d)
A → X (a|b)
B → X b
X → x [X]

(Notation note: In the above example, the production S → (A c) | (B d) reads: "An S is either an A followed by a c or a B followed by a d." The production X → x [X] reads "An X is an x followed by an optional X.")

This grammar generates one of the following three variations of string: xac, xbc, or xbd (where x here is understood to mean one or more x's.) Next, consider how this grammar, used as a parse specification, might effect a top-down, left-right parse of the string xxxxxbd:

The rule A will recognize xxxxxb (by first descending into X to recognize one x, and again descending into X until all the x's are consumed, and then recognizing the b), and then return to S, and fail to recognize a c. The next clause of S will then descend into B, which in turn again descends into X and recognizes the x's by means of many recursive calls to X, and then a b, and returns to S and finally recognizes a d.

The key concept here is inherent in the phrase again descends into X. The process of looking forward, failing, backing up, and then retrying the next alternative is known in parsing as backtracking, and it is primarily backtracking that presents opportunities for memoization in parsing. Consider a function RuleAcceptsSomeInput(Rule, Position, Input), where the parameters are as follows:

  • Rule is the name of the rule under consideration.
  • Position is the offset currently under consideration in the input.
  • Input is the input under consideration.

Let the return value of the function RuleAcceptsSomeInput be the length of the input accepted by Rule, or 0 if that rule does not accept any input at that offset in the string. In a backtracking scenario with such memoization, the parsing process is as follows:

When the rule A descends into X at offset 0, it memoizes the length 5 against that position and the rule X. After the first alternative of S fails at the d (where a c was expected), B, rather than descending again into X, queries position 0 against rule X in the memoization engine, is returned a length of 5, and thus saves having to actually descend again into X; it carries on as if it had descended into X as many times as before.

In the above example, one or many descents into X may occur, allowing for strings such as xxxxxxxxxxxxxxxxbd. In fact, there may be any number of x's before the b. While the call to S must recursively descend into X as many times as there are x's, B will never have to descend into X at all, since the return value of RuleAcceptsSomeInput(X, 0, xxxxxxxxxxxxxxxxbd) will be 16 (in this particular case).
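
A compact Python sketch of RuleAcceptsSomeInput for the grammar above follows (the names are illustrative; X is matched greedily, which suffices here because no other terminal in this grammar is an x, and a real parser would key or clear the memo table per input):

memo = {}  # maps (rule, position) to the length accepted at that offset

def rule_accepts_some_input(rule, position, text):
    key = (rule, position)
    if key not in memo:
        memo[key] = RULES[rule](position, text)  # descend only on a miss
    return memo[key]

def match_x(position, text):
    # X -> x [X]: one 'x' followed, greedily, by as many more as possible
    length = 0
    while position + length < len(text) and text[position + length] == "x":
        length += 1
    return length  # 0 signals that X accepts no input at this offset

def match_a(position, text):
    # A -> X (a | b)
    n = rule_accepts_some_input("X", position, text)
    if n and position + n < len(text) and text[position + n] in "ab":
        return n + 1
    return 0

def match_b(position, text):
    # B -> X b
    n = rule_accepts_some_input("X", position, text)
    if n and position + n < len(text) and text[position + n] == "b":
        return n + 1
    return 0

def match_s(position, text):
    # S -> (A c) | (B d): try A first, backtrack to B on failure
    for sub, follow in (("A", "c"), ("B", "d")):
        n = rule_accepts_some_input(sub, position, text)
        if n and position + n < len(text) and text[position + n] == follow:
            return n + 1
    return 0

RULES = {"S": match_s, "A": match_a, "B": match_b, "X": match_x}

For the input xxxxxbd, rule_accepts_some_input("S", 0, "xxxxxbd") returns 7: the descent through A computes and memoizes a length of 5 for ("X", 0), and the later attempt through B is answered from the table rather than by descending into X again.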

Parsers that make use of syntactic predicates are also able to memoize the results of predicate parses, thereby reducing such constructions as:

S → (A)? A
A → /* some rule */

to one descent into A.

If a parser builds a parse tree during a parse, it must memoize not only the length of the input that matches at some offset against a given rule, but also must store the sub-tree that is generated by that rule at that offset in the input, since subsequent calls to the rule by the parser will not actually descend and rebuild that tree. For the same reason, memoized parser algorithms that generate calls to external code (sometimes called a semantic action routine) when a rule matches must use some scheme to ensure that such rules are invoked in a predictable order.

Since, for any given backtracking- or syntactic-predicate-capable parser, not every grammar will need backtracking or predicate checks, the overhead of storing each rule's parse results against every offset in the input (and storing the parse tree if the parsing process does that implicitly) may actually slow down a parser. This effect can be mitigated by explicitly selecting the rules the parser will memoize.[14]

References

  1. ^ a b c Norvig, Peter (1991). "Techniques for Automatic Memoization with Applications to Context-Free Parsing". Computational Linguistics. 17 (1): 91–98.
  2. ^ Warren, David S. (1992). "Memoing for logic programs". Communications of the ACM. 35 (3): 93–111. doi:10.1145/131295.131299. ISSN 0001-0782.
  3. ^ Michie, Donald (1968). "'Memo' Functions and Machine Learning" (PDF). Nature. 218 (5136): 19–22. Bibcode:1968Natur.218...19M. doi:10.1038/218019a0. S2CID 4265138.
  4. ^ Hoffman, Berthold (1992). "Term Rewriting with Sharing and Memoïzation". In Kirchner, H.; Levi, G. (eds.). Algebraic and Logic Programming: Third International Conference, Proceedings, Volterra, Italy, 2–4 September 1992. Lecture Notes in Computer Science. Vol. 632. Berlin: Springer. pp. 128–142. doi:10.1007/BFb0013824. ISBN 978-3-540-55873-6.
  5. ^ Mayfield, James; et al. (1995). "Using Automatic Memoization as a Software Engineering Tool in Real-World AI Systems" (PDF). Proceedings of the Eleventh IEEE Conference on Artificial Intelligence for Applications (CAIA '95). pp. 87–93. doi:10.1109/CAIA.1995.378786. hdl:11603/12722. ISBN 0-8186-7070-3. S2CID 8963326.
  6. ^ "Bricolage: Memoization".
  7. ^ Frost, Richard; Szydlowski, Barbara (1996). "Memoizing Purely Functional Top-Down Backtracking Language Processors". Sci. Comput. Program. 27 (3): 263–288. doi:10.1016/0167-6423(96)00014-7.
  8. ^ Frost, Richard (1994). "Using Memoization to Achieve Polynomial Complexity of Purely Functional Executable Specifications of Non-Deterministic Top-Down Parsers". SIGPLAN Notices. 29 (4): 23–30. doi:10.1145/181761.181764. S2CID 10616505.
  9. ^ Frost, Richard (2003). "Monadic Memoization towards Correctness-Preserving Reduction of Search". Canadian Conference on AI 2003. Lecture Notes in Computer Science. Vol. 2671. pp. 66–80. doi:10.1007/3-540-44886-1_8. ISBN 978-3-540-40300-5.
  10. ^ Johnson, Mark (1995). "Memoization of Top-Down Parsing". Computational Linguistics. 21 (3): 405–417. arXiv:cmp-lg/9504016. Bibcode:1995cmp.lg....4016J.
  11. ^ a b Johnson, Mark & Dörre, Jochen (1995). "Memoization of Coroutined Constraints". Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge, Massachusetts. arXiv:cmp-lg/9504028.
  12. ^ a b Ford, Bryan (2002). Packrat Parsing: a Practical Linear-Time Algorithm with Backtracking (Master’s thesis). Massachusetts Institute of Technology. hdl:1721.1/87310.
  13. ^ Tomita, Masaru (1985). Efficient Parsing for Natural Language. Boston: Kluwer. ISBN 0-89838-202-5.
  14. ^ Acar, Umut A.; et al. (2003). "Selective Memoization". Proceedings of the 30th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 15–17 January 2003. Vol. 38. New Orleans, Louisiana. pp. 14–25. arXiv:1106.0447. doi:10.1145/640128.604133.